Review Index:

The NVIDIA Titan X (Pascal) 12GB Graphics Card Review

Manufacturer: NVIDIA

A Beautiful Graphics Card

As a surprise to nearly everyone, on July 21st NVIDIA announced the existence of the new Titan X graphics card, which is based on the brand new GP102 Pascal GPU. Though it shares a name, for some unexplained reason, with the Maxwell-based Titan X launched in March of 2015, this card is a significant performance upgrade. Using the largest consumer-facing Pascal GPU to date (only the GP100 used in the Tesla P100 exceeds it), the new Titan X is going to be a very expensive, and very fast, gaming card.

As has been the case since the introduction of the Titan brand, NVIDIA claims that this card is for gamers that want the very best in graphics hardware as well as for developers that need an ultra-powerful GPGPU device. GP102 does not integrate improved FP64 / double precision compute cores, so we are basically looking at an upgraded and improved GP104 Pascal chip. That's nothing to sneeze at, of course, and you can see in the specifications below that we expect (and can now show you) Titan X (Pascal) is a gaming monster.

  Titan X (Pascal) GTX 1080 GTX 980 Ti TITAN X GTX 980 R9 Fury X R9 Fury R9 Nano R9 390X
GPU GP102 GP104 GM200 GM200 GM204 Fiji XT Fiji Pro Fiji XT Hawaii XT
GPU Cores 3584 2560 2816 3072 2048 4096 3584 4096 2816
Rated Clock 1417 MHz 1607 MHz 1000 MHz 1000 MHz 1126 MHz 1050 MHz 1000 MHz up to 1000 MHz 1050 MHz
Texture Units 224 160 176 192 128 256 224 256 176
ROP Units 96 64 96 96 64 64 64 64 64
Memory 12GB 8GB 6GB 12GB 4GB 4GB 4GB 4GB 8GB
Memory Clock 10000 MHz 10000 MHz 7000 MHz 7000 MHz 7000 MHz 500 MHz 500 MHz 500 MHz 6000 MHz
Memory Interface 384-bit G5X 256-bit G5X 384-bit 384-bit 256-bit 4096-bit (HBM) 4096-bit (HBM) 4096-bit (HBM) 512-bit
Memory Bandwidth 480 GB/s 320 GB/s 336 GB/s 336 GB/s 224 GB/s 512 GB/s 512 GB/s 512 GB/s 320 GB/s
TDP 250 watts 180 watts 250 watts 250 watts 165 watts 275 watts 275 watts 175 watts 275 watts
Peak Compute 11.0 TFLOPS 8.2 TFLOPS 5.63 TFLOPS 6.14 TFLOPS 4.61 TFLOPS 8.60 TFLOPS 7.20 TFLOPS 8.19 TFLOPS 5.63 TFLOPS
Transistor Count 11.0B 7.2B 8.0B 8.0B 5.2B 8.9B 8.9B 8.9B 6.2B
Process Tech 16nm 16nm 28nm 28nm 28nm 28nm 28nm 28nm 28nm
MSRP (current) $1,200 $599 $649 $999 $499 $649 $549 $499 $329

GP102 features 40% more CUDA cores than the GP104 at slightly lower clock speeds. The rated 11 TFLOPS of single precision compute of the new Titan X is 34% higher than that of the GeForce GTX 1080 and I would expect gaming performance to scale in line with that difference.
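Those peak-compute figures fall straight out of the core counts and clocks, since each CUDA core can retire one fused multiply-add (two FLOPs) per cycle. A minimal sketch; the ~1531 MHz boost clock used to reproduce the 11 TFLOPS rating is our inference, since the 1417 MHz rated clock only yields about 10.2 TFLOPS:

```python
# Peak FP32 throughput = CUDA cores x 2 FLOPs (one FMA) x clock.
def peak_tflops(cores: int, clock_mhz: float) -> float:
    return cores * 2 * clock_mhz * 1e6 / 1e12

print(peak_tflops(3584, 1417))  # Titan X at rated clock: ~10.2 TFLOPS
print(peak_tflops(3584, 1531))  # Titan X at assumed boost: ~11.0 TFLOPS
print(peak_tflops(2560, 1607))  # GTX 1080 at rated clock: ~8.2 TFLOPS
```

The 11.0 / 8.2 ratio is where the 34% figure above comes from.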

Titan X (Pascal) does not utilize the full GP102 GPU; the recently announced Pascal P6000 does, however, which gives it a CUDA core count of 3,840 (256 more than Titan X).

View Full Size

A full GP102 GPU

Giving up the complete GPU costs the new Titan X roughly 7% of its potential compute capability, although that cut-down configuration likely helps increase available clock headroom and yield.

The new Titan X will feature 12GB of GDDR5X memory, not HBM as the GP100 chip has, so this is clearly a unique chip with a new memory interface. NVIDIA claims it has 480 GB/s of bandwidth on a 384-bit memory controller interface running at the same 10 Gbps as the GTX 1080.
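That bandwidth claim is easy to sanity-check: it is just the bus width (in bytes) multiplied by the per-pin data rate. A quick sketch using the figures from the spec table:

```python
# Bandwidth (GB/s) = bus width in bits / 8 bits-per-byte x data rate in Gbps.
def bandwidth_gbs(bus_bits: int, rate_gbps: float) -> float:
    return bus_bits / 8 * rate_gbps

print(bandwidth_gbs(384, 10))  # Titan X: 480.0 GB/s
print(bandwidth_gbs(256, 10))  # GTX 1080: 320.0 GB/s
```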

Continue reading our review of the new NVIDIA Titan X (Pascal) Graphics Card!!

Other than these changes, and the corresponding improvements in texture unit and ROP counts, there really isn't anything architecturally different in the Pascal-based Titan X compared to a GeForce GTX 1080. Just more, better and faster. If you are new to NVIDIA's latest Pascal architecture, its product features and what the move to 16nm nets them, you definitely should read our GeForce GTX 1080 review that covers all of that!

What will you be asked to pay for this performance? $1,200, going on sale today, and only direct from NVIDIA, at least for now. Considering the prices of GeForce GTX 1080 cards amid such limited availability, the $1,200 price tag might not seem so insane. It is, however, higher than the $999 starting price of the Maxwell-based Titan X in March of 2015 - the claims that NVIDIA is artificially raising prices of cards in each segment will continue, it seems.

The NVIDIA Titan X (Pascal) Graphics Card

Our time with the new Titan X was short, as our team prepares for a three-week whirlwind of events, but we wanted to get a quick review of this beast out the door ASAP.

View Full Size

The new Titan X features the same design language introduced with the GTX 1080, a riff on the now-aging design for NVIDIA reference products. This includes a blower-style cooler with an illuminated GeForce GTX logo along the top of the card (interestingly, one of only a few places I see referencing GeForce with this product) and a window to see the heatsink under the shroud.

View Full Size

Rotating the card around to the back, we find a full-cover backplate on the Titan X with an optional segment on the back half that you can remove to improve airflow to adjacent graphics cards in SLI. The backplate even has a custom Titan X stamp on it.

View Full Size

Though the shroud design is shared with the GTX 1080, the Titan X goes with a blacked-out color scheme and a chrome “TITAN X” logo along the front.

View Full Size

Display connectivity remains unchanged: three full size DisplayPort connections, one HDMI 2.0a and a dual-link DVI connection for legacy displays.

View Full Size

With a 250 watt TDP, the card includes both a 6-pin and an 8-pin external power connection. That configuration is more than enough to supply the rated 250 watts, and it allows the card to draw as much as 300 watts when overclocked.
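The math behind that 300 watt ceiling follows from the PCIe slot's 75 W limit plus the conventional 75 W and 150 W ratings for 6-pin and 8-pin auxiliary connectors:

```python
# Per-spec power limits (watts): PCIe x16 slot, 6-pin PEG, 8-pin PEG.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

max_board_power = SLOT_W + SIX_PIN_W + EIGHT_PIN_W
print(max_board_power)  # 300 W available, against the 250 W TDP
```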

View Full Size

Titan X (Pascal) includes a set of SLI fingers that support NVIDIA's new high-bandwidth SLI bridges, though still only in 2-way SLI officially.

Video News

August 3, 2016 | 12:20 AM - Posted by Anonymous (not verified)

Till recently I was not aware that overclocking a GPU can put the motherboard at risk.
Does any motherboard vendor advertise maximum power over PCIe?

August 3, 2016 | 12:51 AM - Posted by Anonymous (not verified)

The PCIe spec states that video cards can only draw a maximum of 75W over PCIe. To my knowledge, no consumer motherboard manufacturers increase this as it would be outside spec and require the vendor to remove official PCIe support and advertising (a similar thing is done with video cards with more than a 6 + 8 power connector).

August 3, 2016 | 08:47 AM - Posted by Anonymous (not verified)

True. Motherboard manufacturers, especially of enthusiast overclocking lines such as the current Z170 Intel boards, are particularly keen on ensuring that their premium overclocking motherboards crap out at anything over 75 watts when users attempt a power hungry overclock.

In addition, motherboard manufacturers also ensure that there is absolutely no tolerance on their cheap motherboards to provide an iota above the 75W 'spec'. They are particularly interested in having their boards fail in such a scenario, as they can then use the RMA process as an opportunity to showcase their exemplary Customer Support.

August 3, 2016 | 11:03 AM - Posted by Allyn Malventano

Kidding aside, we've found that even premium boards don't like high draw on the slot power. Our GPU testbed is a good quad-rated board and it drops 0.5V when supplying a single GPU over the current limit. These guys really need to stick to drawing most of their power from the 6/8-pin connectors.

August 3, 2016 | 04:17 PM - Posted by tatakai

Wait, is that because of the slot power? I've found that GPU load typically drops the 12V value. I assumed it was just the PSU itself not putting out the voltage because of the load.

You seem to suggest it's the motherboard load that causes it... but that seems unlikely. Do you measure the voltage from the PSU directly or just use the reading from monitoring software? It would be solved by checking the PSU output voltage at the same time.

August 3, 2016 | 08:23 PM - Posted by HeavyHemi (not verified)

That is a very good point. It is unlikely he has the PCIe slot adapters needed to actually measure the voltage at the slot.

August 4, 2016 | 07:33 AM - Posted by amadsilentthirst

I see the day when a single GPU has an extended "bottom" and spans two PCIe slots.

August 3, 2016 | 12:28 AM - Posted by Anonymous (not verified)

"Titan X is 70-120% faster than the fastest single GPU AMD graphics card"
Please test against RX480 in 4 way crossfire.

August 3, 2016 | 12:34 AM - Posted by Anonymous (not verified)

That's not a single GPU so the claim doesn't apply. As it is, Xfire scaling is hit or miss, so odds are the Titan X would still win a majority of tests.

August 3, 2016 | 12:43 AM - Posted by Ryan Shrout

I'd actually be willing to do this, if I had four cards! :D

August 3, 2016 | 02:08 AM - Posted by DK76 (not verified)

Time to call AMD, they should help you try to beat that beast of a card...

But the power draw is 2x as much tho, so really, are you winning?

August 3, 2016 | 02:23 AM - Posted by JohnGR

Maybe you can't find 4 cards easily, but maybe you can find the latest drivers?

August 3, 2016 | 04:03 AM - Posted by Allyn Malventano

Maybe the latest drivers had some reported oddball issues that Ryan didn't want to waste his limited testing / writing time screwing around with?

August 3, 2016 | 09:34 AM - Posted by Anonymously Anonymous (not verified)

I find it hilarious that you have to resort to petty bickering with another reader. Seriously dude, grow up and be the professional that Josh, Ryan and Sebastien are and stop arguing and trying to prove you are better than a reader.

August 3, 2016 | 10:21 AM - Posted by flippityfloppit...

Lol @ the people the internet brings out.

August 3, 2016 | 10:59 AM - Posted by Allyn Malventano

Pretty much.

August 3, 2016 | 07:16 PM - Posted by pdjblum

Really. And you think your behavior is any different? You are defensive and sarcastic and disparaging. You should be receptive and cordial. Leave being dickheads to us readers.

August 3, 2016 | 12:17 PM - Posted by Josh Walrath

How dare you call me a "professional"!

August 3, 2016 | 01:00 PM - Posted by JohnGR

Josh: The Professional (1994)

Nice movie.


August 3, 2016 | 05:00 PM - Posted by CaptTomato


August 3, 2016 | 11:01 AM - Posted by Ryan Shrout

Actually, that was a typo from a copy/paste that I brought over. Thanks for pointing it out. Updated to the following:

AMD: 16.7.2
NVIDIA: 368.98

Why do you come around here if you don't like what I have to say? Honestly curious. 

August 3, 2016 | 11:11 AM - Posted by Anonymous (not verified)

Who cares?

The guy gets wrecked with every comment he makes. Kind of fun to see actually.

August 3, 2016 | 01:01 PM - Posted by JohnGR

Ah! OK. People are shouting at reviewers for not using the latest drivers. I think you can understand that being a number of versions behind would draw even stronger reactions.

About your question.

Two reasons.

In the land of the blind you follow the one-eyed people, and while I might have some concerns about things you (the site) post, and most times about things you don't post, I do agree, like most people, that you do an excellent job on the technical side.

Second, not everyone who goes hysterical in here wants to harm the site. Some people just want to push you to get better and more objective. A few years ago I was telling you that your promotion of AMD's A10 7850K was just too obvious.

August 4, 2016 | 09:27 AM - Posted by Anonymous (not verified)

Shouldn't your testing have been done using the 369.05 WHQL drivers that were released specifically for the Titan X(Pascal) by nVidia on 8/2? Just wondering...

August 3, 2016 | 06:58 AM - Posted by KeithPlaysPC (not verified)

How many do you have? I'd loan you mine for a few days to pull it off :)

August 3, 2016 | 12:02 PM - Posted by sixstringrick

Have PCPerspective, Paul's Hardware, JayzTwoCents, and TekSyndicate join forces for the 4 way 480 vs the TitanX.

August 4, 2016 | 01:07 AM - Posted by Anonymous (not verified)

(this) x10^32

August 3, 2016 | 10:10 AM - Posted by Wookie Groomer (not verified)

Don't forget to test that config against a single Titan X on games that don't support multi GPU while you're at it! dipshit...

August 3, 2016 | 12:00 PM - Posted by sixstringrick

The funny thing is, even though I'm sure it would have horrible scaling with that fourth card, it would still be cheaper to buy 4 RX 480s than a single Titan X.

August 3, 2016 | 03:49 PM - Posted by Anonymous (not verified)

4 way crossfire? Wow you AMD fanbois are getting desperate

August 4, 2016 | 01:05 AM - Posted by Anonymous (not verified)

There are any number of ways to interpret that request that have nothing to do with "fanbois" or desperation. For example:

"Damn, that is one efking wicked graphics card. Is it even possible for a smash of RX 480s to catch up to it?"

"Nvidia's giving up on more-than-2-way SLI. AMD is all about supporting up-to-4-way CFX. Let's grab a new Titan X, two 980 Ti's, two vanilla Fury's, and four RX 480's and just see what happens."

"Just what kind of GPU would AMD need to build to compete with the Titan?"

"Hey, let's have some fun with this." Or, depending on the group, "Here, hold my beer, I got an idea."

Why do you imbeciles always have to default to fanboy accusations?

August 5, 2016 | 01:44 AM - Posted by Anonymous (not verified)

Both SLI and CrossFire suck big red donkey wankers; it's Vulkan/DX12 explicit multi-GPU, managed via the API with the games/gaming engines managing all the GPU resources, that matters. That way any GPU plugged into the computer can get gaming workloads sent by the games/gaming engines. You can be damn sure the gaming engine makers will be competing with each other to get the most out of all the available GPU power on a PC via Vulkan/DX12! And with the whole damn gaming industry, OS industry, and API developers working to get the best multi-GPU scaling going for VR and 8K gaming, and to get every last bit of performance out of Vulkan/DX12 multi-GPU, maybe even 4 RX 480s and 4 GTX 1060s could be tested. JHH may not allow GTX 1060s to use SLI, but what is JHH going to do to stop the multi-GPU adapter in the Vulkan/DX12 APIs from working!

December 12, 2017 | 03:21 PM - Posted by Maurice (not verified)

Everything he possibly can do, he does. He does NOT want any graphics card linking with HIS, which is why multi-adapter, i.e. Radeon plus GeForce, was essentially "killed" many generations ago, primarily because the Radeons were pushing much better results with PhysX than Nvidia's own products were able to do. Can't have that lol.

Lord knows JHH/Nvidia are all about $$$$$$$$$, not making happy customers (ahem, Gsync), when they so easily could have jumped on board with Freesync so everyone gets the BEST experience possible no matter if you choose to buy Nvidia, AMD, Via or whatever.

Am sure if he had it his way (surprised he has not done so yet) he would be planting a chip in every Nvidia GPU that would prevent multi-GPU use beyond "spec", i.e. something like a 1050 Ti would NOT be able to use more than one per system.

Gratz on wanting to show your product as very good; terrible for us consumers, because we could have real choice if they could all just "get along".

The thing with explicit multi-adapter, multi-GPU via DX12/Vulkan: it relies on the developer to ALLOW, i.e. code for, its use on a per-game basis. Nvidia sure as hell are not going to force the code, nor will AMD truly implement it via their own product drivers if it may pose performance issues or prevent their own products from selling as they would like.

Game devs are awesome (high-spec games we love to play are not easy to make), but they are usually quite horrid at giving the best-performing/most robust code possible for a specific title. For example, of the DX12 titles there are like 2-4 that use it quite well, but even those are not using EVERYTHING DX12/Vulkan are capable of; the remainder that have DX12/Vulkan on the box really do nothing with it IMHO, it just "sells", kind of like "made for Windows version X, Y, or Z" often does NOT work as it should lol.

August 3, 2016 | 12:38 AM - Posted by zmeul (not verified)

no thermals ?!
I have a feeling this card will throttle just like the Maxwell v2 Titan X did

August 3, 2016 | 12:43 AM - Posted by Ryan Shrout

There are some temperature listings on the overclock page.

August 3, 2016 | 12:39 AM - Posted by Anonymous (not verified)

"testing at 4K with the GTX 1060 running at a +150 MHz offset."

1060 should be Titan X

August 3, 2016 | 12:44 AM - Posted by Ryan Shrout

Whoops thanks!

August 3, 2016 | 12:52 AM - Posted by LTT Fan (not verified)

It's amazing to see that a 5960X is the bottleneck when running GTA5. Just how taxing on the cpu is that game?

August 3, 2016 | 12:59 AM - Posted by Anonymous (not verified)

If, for example, the game is coded to operate on only 4 cores, then the 5960X is no better than an underclocked i5-4690K.

(I would honestly have no idea how many cores GTA5 uses, I'm just brainstorming.)

August 3, 2016 | 01:06 AM - Posted by LTT Fan (not verified)

I actually have a 4690K and all 4 cores are almost always pinned at 100% while I'm playing GTA5. I know that the game needs more than a quad core, but I'm surprised that a hyperthreaded eight-core isn't enough.

August 4, 2016 | 04:03 AM - Posted by BlackDove (not verified)

No CPU is powerful enough to handle the power virus 2D menus known as Scaleform UI.

I would LOVE to see Ryan do an investigation into this phenomenon, but i dont know if anyone will.

I, and many others, have been bitching about Scaleform UI menus being power viruses that max out CPU and GPU arbitrarily.

Most modern games use it and it seems that in many implementations it causes the GPU to boost to max clock and thermal limits for no real reason.

August 3, 2016 | 05:42 AM - Posted by Anonymous (not verified)

Same happens with 1080 SLI in other games; I've seen this happen on BF4's Siege of Shanghai map.

August 3, 2016 | 05:44 AM - Posted by Anonymous (not verified)


Like others said, it's utilization, not a hardware bottleneck.

August 3, 2016 | 12:52 AM - Posted by technolucas (not verified)

NVidia's launch schedule is much tighter this year, no? Makes me wonder how long before the 1080ti comes out....

August 3, 2016 | 02:25 AM - Posted by JohnGR

No, the real question here is: how long before a Pascal refresh?

August 3, 2016 | 07:04 AM - Posted by PeterMantle (not verified)

We will see a 1080 Ti and then next line up will be Volta in Q2 2017.

Skipping Pascal for sure. Not impressed AT ALL.

August 3, 2016 | 10:36 AM - Posted by BlackDove (not verified)

Considering that GM200 and GM204 showed the smallest generational improvements in ages, because they were 28nm just like GK110 and GK104, what kind of improvements were you expecting?

GP100, GP102 and GP104 are all huge improvements on what they replace, unlike the underperforming 900 series.

August 3, 2016 | 12:09 PM - Posted by TREY LONG (not verified)

Pure rubbish. Huge improvement over the cards they replaced.

August 3, 2016 | 09:23 AM - Posted by malurt

More like "launch" ... when can you actually just buy one and not wait a month?

August 3, 2016 | 01:03 AM - Posted by donut (not verified)

So 4K gaming for the mainstream is years off by the looks of it.
I love PC gaming but not at this price.

That's one fast card tho.

Thanks for the review.

August 3, 2016 | 01:15 AM - Posted by Anonymous (not verified)

This year's 1070 is as fast as a Maxwell Titan X. So more than likely a 1170 will be as fast as a Pascal Titan X, so I'd say two years, if mainstream is considered no more than $250 (GTX 1260).

August 3, 2016 | 02:59 AM - Posted by JohnGR

An 1170 will probably be as fast as a 1080. Don't be fooled by the jump from Maxwell to Pascal. We moved from 28nm to 16nm FinFET with that change.

August 3, 2016 | 07:06 AM - Posted by PeterMantle (not verified)

970 was on par with 780 Ti, both on 28nm.

Nothing about Pascal is impressive so far. Same performance per dollar, and still lacking hardware async compute support.

August 3, 2016 | 10:40 AM - Posted by BlackDove (not verified)

Asynchronous compute would help to some degree, but more memory bandwidth and decreasing latency between the CPU and GPU would likely do much more. There are also many asynchronous compute techniques, so which are you referring to?

October 5, 2016 | 09:40 AM - Posted by Ninjawithagun

Incorrect. Pascal has built-in hardware async compute:

The biggest difference in Pascal's async compute capabilities over Maxwell 2 are 1) dynamic load tasking (vs. static in Maxwell 2), and 2) 25% increase in overall concurrent execution.

The bottom line is that Pascal does have true hardware async compute functionality. Not saying it's better than AMD's approach, but it is there nonetheless.

August 4, 2016 | 07:12 AM - Posted by Autherie (not verified)

No, not true. 1070 can't touch Titan X Maxwell when both cards get overclocked to their max (2.1GHz vs 1.5GHz).

October 5, 2016 | 10:07 AM - Posted by Ninjawithagun

Incorrect, though the performance is close when comparing a stock clocked Pascal card to a highly overclocked Maxwell card of the same class (i.e. GTX1080 vs. 980Ti). But don't even try comparing a Maxwell Titan X to a Pascal Titan X...the 'old' Maxwell Titan X gets its ass kicked no matter what the clocks ;-)

August 3, 2016 | 02:10 AM - Posted by DK76 (not verified)

2-3 years i would say, for mainstream...

stay tuned!

August 3, 2016 | 01:19 AM - Posted by biohazard918

"the claims that NVIDIA is artificially raising prices of cards in each segment will continue" What do you mean claims? The numbers don't lie; it's a fact.

August 3, 2016 | 02:55 AM - Posted by JohnGR

The original Titan was created for two reasons: marketing, and creating new, higher price points in the market. Nvidia saw that integrated graphics in AMD's APUs and the latest Intel processors were killing the low-end market, so they needed more and pricier models in the high-end category. In the past you had multiple NEW low-to-mid range models. Today Nvidia gives you 1-2 new low-to-mid range models, later, and 2-4 new models in the high-end market.

August 3, 2016 | 10:43 AM - Posted by BlackDove (not verified)

Was it? I thought it was also the cheapest way to get over 1 TFLOPS double precision with 6GB of RAM. The K6000 was $5,000. The Titan was an amazing value for anyone who needed DP FLOPS but not necessarily ECC.

August 3, 2016 | 11:37 AM - Posted by Anonymous (not verified)

Yes, and the same goes for AMD's Radeon Pro Duo, which lets a developer get at the pro drivers and develop for the pricey Pro versions without having to pay as much. So Nvidia did have the same thing in mind for the Titan X, but did Nvidia even allow developers using the Titan X to get at the pro versions of Titan's drivers? $1,500 is still less costly for AMD Radeon Pro Duo customers than $2,000 to $4,000, and the same goes for Nvidia's Titan X used for compute without the expensive pro hardware/drivers. AMD is a little bit more of a bargain for developers by offering access to pro drivers for development, and I am not sure if Nvidia offers the same driver deals with the Titan X for developers.

August 3, 2016 | 03:58 PM - Posted by Anonymous Nvidia User (not verified)

That $1,500 looks like a bargain until you consider 2x Fury X at $400 apiece: 88% more costly. Almost double. Not so much of a bargain. Even compared to the Titan X Pascal it costs 25% more.

Both cards' performance will be in the same neighborhood when there is good CrossFire scaling for the Pro Duo. When there is not, it will lose by 70-120%, as it will be a single Fury X core vs the beastly Titan X.

The Titan X has 31% less compute than the Pro Duo. It essentially has 3x the VRAM, as memory is not additive in CrossFire. But even comparing 8GB vs 12GB, the Pro Duo would still have 33% less VRAM to use.

August 3, 2016 | 07:36 PM - Posted by Anonymous (not verified)

But AMD gives the pro drivers with the deal, so the Radeon Pro Duo can be used to develop for the production/professional GPU cards that cost much more! Is Nvidia offering Titan X users the pro/Quadro drivers, or will JHH make developers buy the more costly $5,000 Quadro SKUs to get at the pro drivers? That pro driver certification adds a lot of cost to the pro SKUs' price, so AMD's Radeon Pro Duo (at $1,500) is a great deal that allows developers to get at the pro drivers from AMD.

Two Fury X gaming SKUs do not come with the pro drivers, so no developer can use the gimped-down Fury X gaming drivers for pro development. AMD's real professional Radeon Pro drivers are worth almost as much as, if not more than, the price of the Radeon Pro Duo's hardware, and certification of the professional drivers is a very costly process to get the drivers working with the professional graphics software.

The real deal with the Radeon Pro Duo is the pro drivers, not so much the hardware! It's good to have the GPU hardware, but it's great to have access to the pro drivers without paying $2,500+ for the FirePro (now called Radeon Pro WX series) hardware.

Gaming drivers are a joke in the professional world, as gaming drivers are tuned for FPS and not accuracy. Please do not confuse the two, as they are very different for very different workloads; the pro drivers can and do cost just as much as the hardware, and even more over the long term, as professional drivers come with long-term professional driver support, and that is a very high cost to maintain. That is what makes professional GPUs so expensive.

August 3, 2016 | 12:45 PM - Posted by JohnGR

And look where Titan is now. Just an overpriced gaming card with extra memory.

The original Titan was a semi pro card because it needed to make sense. It was creating a new brand. But was also advertised and pushed as the absolute gaming card creating a higher price point also for the gaming cards.

In the past you had plenty of mid-to-low end models and only one high-end. Now you get 3 high-end models - Titan, X80 Ti, X80 - and barely a new low end.

That was Nvidia's business plan from the beginning because Nvidia doesn't sell desktop x86 processors with integrated graphics or chipsets with integrated graphics.

October 5, 2016 | 10:11 AM - Posted by Ninjawithagun

AMD is just as guilty. In fact, one could argue that Nvidia wouldn't be able to increase prices if there were actually any real competition in the marketplace. Where the hell are the Polaris high-end and enthusiast cards? The RX 490 is way overdue, yet is nowhere to be found, and AMD has yet to officially announce specs or a release date...

August 3, 2016 | 02:13 AM - Posted by Xanavi

Meh, I'll just buy another 980 used for $250, lol.

August 3, 2016 | 05:45 AM - Posted by Anonymous (not verified)

Good luck when SLI isn't an option. I ran 980 SLI; now I have 1080 SLI. At least one 1080 is almost as fast as my 980 SLI, so it's a good failsafe when SLI isn't an option.

August 3, 2016 | 06:54 PM - Posted by Xanavi

I only have a 1080p Gsync 144hz, but you may be right. It's not like I'm running out to do it now, just saying the value proposition is there. On the flip side I'm not really interested in games made by lazy developers.

August 4, 2016 | 12:19 PM - Posted by Ninjawithagun

I own two GTX980Ti cards (still in my main system), an MSI GTX1080 Sea Hawk X (Hybrid), and a Titan X arriving tomorrow. I can tell you right now that a single GTX1080 does not come close in most games to performance vs. two GTX980Ti cards in SLI. The gap does get tighter at higher resolutions, but still not the same or near the same performance. Now, a single Titan X vs. GTX980Ti SLI will be a whole different story ;-)

August 3, 2016 | 02:20 AM - Posted by JohnGR

Oh look. 30% extra performance for twice the price. Die AMD die.


By the way. Maybe you should update your AMD scores with newer AMD drivers? Just see it as an excuse to write an article with title:
Crimson 16.5.2 vs 16.7.3
or something.

August 3, 2016 | 06:22 AM - Posted by JohnGR

Well, you haven't just missed one driver version. You are using drivers 2-3 months old. Does a tech site need someone to remind them that they have to update drivers more often than the casual user? I guess so.

August 3, 2016 | 10:54 AM - Posted by Allyn Malventano

Yes, tech sites that give a crap need to use drivers that actually work, especially when trying to get a review out the door with limited testing time. We try to not just throw a new driver into the mix until we can make sure the thing works properly, and it's pretty obvious that one doesn't, so no use wasting the time on it.

Also, we typically get a heads up from AMD if large performance improvements are expected in an upcoming driver. We got no such notice since the PCIe power fix, but I think Ryan was only one version behind this latest one regardless.

August 3, 2016 | 12:50 PM - Posted by JohnGR

Ryan wrote that it was a typo, the end.

October 5, 2016 | 10:13 AM - Posted by Ninjawithagun

Yes, the end for AMD...$6/share...pffft, nuff said. And RX 490 is AWOL :P

August 3, 2016 | 04:42 AM - Posted by Anonymous (not verified)


August 3, 2016 | 11:03 AM - Posted by Ryan Shrout

That was a copy/paste typo. I am using 16.7.2, same as for the RX 480 review.

August 3, 2016 | 02:52 AM - Posted by pdjblum

When will you be finished with the Sapphire 480 Nitro review? You seemed to be well into it at the time of the podcast.

August 3, 2016 | 04:10 AM - Posted by Allyn Malventano

We're in the pre-QuakeCon crush. Might be a few weeks. 

August 3, 2016 | 04:42 AM - Posted by Ha-Nocri (not verified)

Will you test it in DOOM Vulkan/Async finally?

August 3, 2016 | 11:04 AM - Posted by Ryan Shrout

Probably never.

August 3, 2016 | 06:57 PM - Posted by Xanavi

Hey Ryan, do you think if AMD called it the RX470 that people wouldn't be so quick to compare it to obviously higher end equipment? Marketing and fanboys eh?

August 4, 2016 | 08:51 AM - Posted by Ha-Nocri (not verified)

Included in actual reviews?

August 3, 2016 | 04:14 AM - Posted by Richard Schlegel (not verified)

This is not a gaming card, but a deep learning card. So let's make 10 pages of gaming tests. :^)

August 3, 2016 | 10:45 AM - Posted by BlackDove (not verified)

Most people who do serious work with GPUs would need to verify their code on it themselves, or read a site like The Next Platform (I post there constantly too).

August 3, 2016 | 03:52 PM - Posted by Anonymous (not verified)

Wrong, it's completely a gaming card. The deep learning (or whatever) version is the Quadro P6000.

August 4, 2016 | 12:26 PM - Posted by Ninjawithagun

You are actually incorrect. You might want to re-read Nvidia's press release for the Titan X. There is a reason it's officially NOT part of the "GeForce" family. Adding confusion to it all, to save time and to get the cards out for sale, Nvidia used an old shroud design on the Titan X that reads "GEFORCE GTX". My guess is that the second revision will have a new shroud that says "TITAN X" :D

August 3, 2016 | 04:30 AM - Posted by endlesswargasm (not verified)

GPU Boost continues to make a mockery of TFlop ratings.

TF @ 1417 MHz = 10.2 (Rated Clock)

TF @ 1535 MHz = 11.0 (Marketing Clock?)

TF @ 1660 MHz = 11.9 (Stock Avg Gaming Clock)

TF @ 1838 MHz = 13.2 (PCPer OC Clock)
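The commenter's figures check out against the standard cores x 2 FLOPs x clock formula; a quick sketch (the clock labels above are the commenter's, not NVIDIA's):

```python
def tflops(clock_mhz: float, cores: int = 3584) -> float:
    # FP32: each CUDA core retires one FMA (2 FLOPs) per clock.
    return round(cores * 2 * clock_mhz * 1e6 / 1e12, 1)

for mhz in (1417, 1535, 1660, 1838):
    print(mhz, tflops(mhz))  # 10.2, 11.0, 11.9, 13.2 TFLOPS
```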

August 3, 2016 | 11:04 AM - Posted by Ryan Shrout

It definitely makes things more complicated. In my view, you want these companies to be conservative.

October 5, 2016 | 10:29 AM - Posted by Ninjawithagun

I was actually very surprised that an overclock of 1838Mhz was even attainable, let alone stable in benchmarking. I had thought that the stock heatsink and blower fan would not be up to the task. I can't wait to get my Titan X on water, then we'll see how much higher that GP102 will go!

*UPDATE* 5 October 2016

I've had my Titan XP watercooled now for over a month and am very happy with the performance:

Maximum GPU boost (120% power target, 92C temp target):
GPU @ 1864 MHz
GDDR5X @ 10000 MHz
Idle temp: 26C
Max temps @ full GPU load (benchmarking): 46C

Maximum manual overclocks @ stock voltage:
GPU @ 2086 MHz
GDDR5X @ 11000 MHz
Idle temp: 28C
Max temps @ full GPU load (benchmarking): 50C

All temperatures include an Intel Core i7 3930K overclocked @ 4.5 GHz (1.41V) in the same cooling loop, so the temps are indeed awesome ;-)

August 3, 2016 | 05:14 AM - Posted by Anonymous (not verified)

lol AMD better hurry up because by the time Vega comes out they will be on Volta. If that happens they will be so far behind they will be stuck doing budget cards. They did that with their CPUs and look how that worked out. Oh, and the next time I have to hear about 4x 480s I am gonna kill myself. 4-way with cards causes so many problems. 2 is fine but even 3 is just garbage; I did it with my 680s and sold the 3rd 3 weeks later. Besides, could you imagine the wattage draw on AMD's power hungry cards? AMD has nothing to even come close to Nvidia in performance. The 480 is so underwhelming they had no choice but to sell it dirt cheap. It is a K-mart card. I guess I am happy to see the people on welfare getting to play PC games with decent settings tho.

August 3, 2016 | 09:26 AM - Posted by malurt

So you're saying that nvidia will launch 2 new chips in the space of 6 months? Impressive, but doubtful considering the nvidia greed :p

By the way, try to sound like less of a complete dick. It'd help you get taken seriously.

August 3, 2016 | 04:58 PM - Posted by tatakai

I think GPUs are actually different. With CPUs the main issue would be single-threaded vs multi-threaded, but GPUs have no such distinction.

If AMD had a good single-threaded CPU with fewer cores and did budget parts, there would be no problem unless Intel somehow managed to push multithreading and exposed that flaw.

With GPUs, it's all about price points and performance. The only way it would be a problem is if Nvidia managed cheaper, faster GPUs than AMD. A Titan X at $1200 is hardly going to change much for the average gamer, except maybe fool them into thinking it means all Nvidia GPUs are faster.

The 480 was always targeting the price point it is at now, and IMO they aren't selling it dirt cheap. The price some of these go for is silly. Your sentiments are silly, actually. K-mart card? I bet you can't even afford a Titan X and here you are fronting, waiting till Nvidia drops a card in your budget range so you can pretend it's a Titan X.

August 3, 2016 | 05:45 AM - Posted by Ha-Nocri (not verified)

New Article from PCPer: "Titan X fails PCIe specs, draws more than allowed when OC'ed"... I'm so glad you started this "issue", which actually isn't an issue at all. But I guess it was in AMD's case?! :o

August 3, 2016 | 10:57 AM - Posted by Allyn Malventano

Actually, the 480 in 'fixed' form does the same when overclocked. We didn't make a big deal out of that when we retested it, for the same reason we are not making a big deal about it here.

August 4, 2016 | 06:03 AM - Posted by Ha-Nocri (not verified)

Two problems with that.

1. People who buy a 480 rarely OC it, while those that buy a Titan X will most likely OC it.
2. Most people will buy custom 480s, which don't have the issue, while you can only buy a reference Titan X.

So I think we have a serious issue here, by what you said in the 480 article (even though I think it was never really an issue).

August 4, 2016 | 12:33 PM - Posted by Ninjawithagun

Overclocking is done at the risk of the owner. Use a supplemental PCI-E power connector on the motherboard, and BOOM, no more issues. NEXT!

August 3, 2016 | 05:48 AM - Posted by Anonymous (not verified)

If I hadn't gotten my 1080 Hybrid SLI setup I would have got this, as it seems the perfect single-card solution for 1440p/165Hz. My only issue is that this is Nvidia-exclusive, so when you go to sell it the warranty isn't transferable; also, removing the cooler to install an aftermarket cooler voids the warranty if they find out.

August 8, 2016 | 06:08 AM - Posted by Ninjawithagun

Agreed, and it is definitely a bummer. Modding a $1200 graphics card is not for the faint of heart, even if in this case it's the simple removal of a stock heatsink and installation of a water block. It is still something to be taken into account. I, for one, have decided it's worth the risk. My EKWB water block should be here by the end of next week. If anyone is interested in seeing the results of the installation process, let me know via PM. Thanks!

August 3, 2016 | 06:30 AM - Posted by Michael Rand (not verified)

The price in the UK is £1,099.00 and I'm really, really tempted to upgrade my 980 Ti, but the 1080 Ti may be a thing though.

Speaking of the 1080 Ti, how do you think they will do the memory this time? Considering the 1080 has 8GB, surely they can't give the 1080 Ti the same amount.

August 3, 2016 | 09:27 AM - Posted by malurt

Probably 12 on the Ti and it'll perform roughly like this card.

It'll also be the same G5X RAM. HBM won't be a thing till next year for Nvidia at the earliest.

August 3, 2016 | 02:52 PM - Posted by BlackDove (not verified)

Nvidia has been using HBM2 on the P100 for months now.

August 3, 2016 | 05:00 PM - Posted by tatakai

is that even in use anywhere?

August 4, 2016 | 04:06 AM - Posted by BlackDove (not verified)

They sell in batches of thousands for a single supercomputer. They are selling them as fast as they can make them just like Knights Landing and Fujitsu PrimeHPC FX100s.

August 4, 2016 | 09:11 AM - Posted by malurt

Not really relevant, is it?

August 5, 2016 | 03:15 AM - Posted by BlackDove (not verified)

It is a big thing for Nvidia though. Your statement is false.

August 4, 2016 | 11:53 PM - Posted by Anonymous (not verified)

And on the HPC/server/workstation SKUs is where that HBM2 will stay, because the markups on the real pro GPU accelerator/workstation SKUs are much larger than on any consumer SKUs! And until the HPC/server/workstation market's thirst for HBM2 is quenched, the gaming market is not getting its hands on HBM2! Money talks, and JHH is all about the markups!

August 3, 2016 | 06:32 AM - Posted by Pete21 (not verified)

Top AMD GPU is now 70-120% slower than Nvidia!!! Holy crap! No wonder Nvidia is getting away with murder pricing!

August 3, 2016 | 07:08 AM - Posted by PeterMantle (not verified)

$400 vs $1200 .........

August 3, 2016 | 02:36 PM - Posted by Anonymous Nvidia User (not verified)

I always love when people compare a current price to something that just came out at MSRP. The Fury X was $650 when it released.

Look, I found a Fury X for $697; does this count?

Anyway, where are the benches against the Pro Duo at $1500? Looks like the Titan X Pascal would wreck it, and it's a single GPU and cheaper to boot. LOL

August 3, 2016 | 05:06 PM - Posted by tatakai

I'd compare best price vs best, unless the best was out of stock. $400 for a Fury X is pretty decent, as that is cheaper than a 1070 is right now (without lucky discounts anyway).

Ultimately it's of no concern. At least I would hope AMD is not thinking the way you guys are, for their sake. I would assume even you would buy a $500 AMD GPU if it gave you 80-90% of the perf of a Titan X.

The main obstacle for AMD is DX11 efficiency. For DX12 they should be fine beating a Titan X at its base.

August 4, 2016 | 01:15 AM - Posted by Anonymous (not verified)

If AMD released a $350 card that gave you 150% of the performance of a Titan XP, he still wouldn't buy it, because it's AMD.

August 4, 2016 | 06:26 PM - Posted by Anonymous Nvidia User (not verified)

Even as inept as AMD is as a company, they would not sell such a card for only $350. It's your idealization of AMD in that they price their products to help you. They do no such thing.

The FX 9590 was priced at $850+ when released; it's currently $190. The Fury X was priced at $650, same as the 980 Ti, even though it can't beat it in most games.

No, I still wouldn't buy the AMD card, because it would still be missing hardware PhysX, still couldn't handle tessellation, and would still have frame time graphs that look like an earthquake seismograph. Among other things.

Let's be realistic here. AMD is calling Vega enthusiast level; what do you think their price is going to be? Not no $350, unless it is barely faster than two 4-gig RX 480s.

Especially with HBM2, I'm betting it will be a lot closer to $1000 than $350. Unless performance is total crap. It won't be, because AMD will tweak and overclock it to the moon to get performance at least on par with Nvidia, and wattage be damned. AMD fanboys don't care about no stinking wattage.

August 5, 2016 | 01:05 AM - Posted by Anonymous (not verified)

Fury X/Fury were priced to pay for all those years of HBM R&D, and those costs need to be amortized. The low price on the RX 480 and RX 470 is to get market share and the revenues that go with increased market share! Revenues pay the bills, and after the bills are paid, if any cash remains, then that is declared profit. AMD needs to increase its revenues to pay for the R&D, pay the salaries of employees, make debt payments, and mostly to stay in business.

Profits are not as necessary as revenues; a business can go on for quite a while as long as there are not great losses each quarter. So AMD can keep going as long as it has revenue growth. Revenue growth can get the banks to lend even more short-term debt money to keep the business going, and the banks, if they see revenue growth along with new products/product innovation and market share growth/new market growth, will continue to lend, continue to refinance any short-term debt into long-term debt, and continue to renegotiate long-term debt further into the future.

Banks do not care about their business customers' profits; banks only care about their ability to make debt payments! So AMD's banks will look at AMD's revenues, its revenue growth, its market share growth, any new market business that AMD gets, AMD's total debt load relative to its market cap, and total revenue growth projections. That Zen news is also good for more good-faith lending from banks, as AMD will be getting some revenue growth from the server market that it lost a while ago. Zen does not have to beat Intel's latest CPU cores outright, and AMD's GPUs do not have to beat Nvidia's GPUs outright; AMD only needs to be the price/performance winner in the mainstream GPU market and the CPU/APU market. AMD's server market share can only go up because it is practically nil; Zen will get AMD some more server market share, and Polaris will get AMD more mainstream GPU market share.

AMD is so lean from all those years of cutting back that it only needs to get back into the server business with Zen (some Zen server SKU revenues are better than none at all, and in the server market AMD is already close to none at all)! And likewise AMD needs to get more revenue/market share growth with Polaris in the mainstream GPU market to really turn things around. AMD needs to take all of the HBM2 it will be getting and put it towards its professional HPC/workstation/server GPU accelerator products and its HPC/workstation APU-on-an-interposer products, because that's where the real markups are, and only use HBM2 for its flagship consumer SKUs. AMD is so far down that it can only go up, as long as Zen is near Haswell levels of IPC performance and Polaris can get more mainstream GPU market share and the revenues/revenue growth that go with it. AMD only needs to win on the price/performance metric, and it does not even need to take any flagship performance crown to do that.

August 5, 2016 | 07:57 PM - Posted by Anonymous (not verified)

I admire your efforts, but you shouldn't bother bringing facts and logic when talking to him. He doesn't care about facts and logic, he has AMD bashing to do.

August 5, 2016 | 09:37 PM - Posted by Anonymous Nvidia User (not verified)

What you say is true. With their IP they can stay in business until 2020 without making a dime. That is when the 2.2 billion they refinanced is due to be paid. Are they going to get lucky and refinance that 2.2 billion yet again? So the answer is they need to start making some serious cash, or it could be curtains for them.

I wouldn't even wish that on AMD. Unlike AMD fanboys who fantasize about Nvidia's demise.

August 5, 2016 | 07:54 PM - Posted by Anonymous (not verified)

You're an idiot. Either you just don't understand hyperbole, or you're deliberately ignoring it and taking the statement literally because you know perfectly well I'm right and you can't stand the idea of admitting it.

Never once did I say, or even hint, that I expected a $350 top-tier Titan XP-killing graphics card from AMD.

What I said was, it wouldn't matter IF AMD released such a card at such a price, you still wouldn't buy it because your Nvidia fanboyism is infinite. IF, by some strange, magical, completely impossible confluence of events, AMD just one day released an architecture that absolutely ripped Nvidia a new one, in every performance tier, in every price point, at every power consumption level - and let's even pretend that they manage to be capable of handling all the Gameworks crap that was supposed to wreck them - you STILL wouldn't buy them, because you're a blind die-hard fanboy.

It is, and was, an incredibly hyperbolic hypothetical, and thus your entire response is completely irrelevant. You get no points.

August 5, 2016 | 09:41 PM - Posted by Anonymous Nvidia User (not verified)

Wow. Who's the fanatic here? It isn't me. I know you wouldn't buy Nvidia, so me making a BS statement along the same lines as yours wouldn't yield any meaningful exchange anyway. I wouldn't waste my time even typing a response to your fanboyism.

August 3, 2016 | 03:59 PM - Posted by Anonymous (not verified)

...and I can buy 10 Kias for the price of 1 Ferrari. What's your point?

August 3, 2016 | 08:43 AM - Posted by kenjo

Well, what does it mean that something is 120% slower???

If the Nvidia card does 60 fps, then the AMD card would do what? Negative 12 fps? What is that?

August 3, 2016 | 02:29 PM - Posted by Anonymous Nvidia User (not verified)

25 fps vs 60 is 120%

August 4, 2016 | 10:53 AM - Posted by kenjo

No it is not. 25 is 42% of 60, and 120% of 60 is 72.

So again, something being 120% slower does not make much sense.

August 4, 2016 | 06:53 PM - Posted by Anonymous Nvidia User (not verified)

100% = double. 25x2 = (100%) 50 fps. 20% of 50 is 10 frames. So it is 120% slower at 25fps. 50 + 10 fps = 60 fps.

If I halve 60 fps to 30 which is 100% and 20% of 30 is 5 you still get 25.

I know it's confusing but 120% of 60 would be 132 fps.

72 would only be 20% more frames than 60. Not 120%.

Hope this clears it up.

August 5, 2016 | 05:18 AM - Posted by kenjo

"I know it's confusing but 120% of 60 would be 132 fps."

Wrong, it's 72.

"72 would only be 20% more frames than 60. Not 120%."

"Hope this clears it up."

No, it does not.

When someone says X is "something" compared to Y, and that something results in a percentage, then it's the Y value that is the reference value.

X is 120% of Y: we take Y*1.2
X is 120% faster than Y: we take Y+Y*1.2
X is 120% slower than Y: we take Y-Y*1.2

It's the last one that makes zero sense when we are on a scale that cannot go below 0.

Clearly the person made some mistake, but what? Did he mean 20%? Did he just switch X and Y? It's simply not possible to understand what it means to be 120% slower than something else.
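In code form the three readings work out like this (a minimal sketch; the function names are mine), with the last one dropping below zero:

```python
def percent_of(y: float, pct: float) -> float:
    """X is pct% of Y."""
    return y * pct / 100

def percent_faster(y: float, pct: float) -> float:
    """X is pct% faster than Y."""
    return y * (1 + pct / 100)

def percent_slower(y: float, pct: float) -> float:
    """X is pct% slower than Y -- goes negative once pct exceeds 100."""
    return y * (1 - pct / 100)

print(percent_of(60, 120))      # 72.0
print(percent_faster(60, 120))  # 132.0
print(percent_slower(60, 120))  # about -12: the impossible negative fps
```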

August 5, 2016 | 10:22 PM - Posted by Anonymous Nvidia User (not verified)

From a numeric standpoint you are correct. I did not mean the elementary crack towards you Kenjo. Sorry. It was directed at anonymous who constantly hounds me. They said 70%-120% faster. Of which 120% is 70% + 50%. We'll use 100 to make it simple. 70% faster would be 170. 120% would make it 220. 100% faster would be 200. Even doing it the 100 plus 70% = 170 + 50% of 100 is 50. So 170 + 50 = same 220.

He could have said it gets 170%-220% of the performance of the lesser card and still be correct.

August 5, 2016 | 08:02 PM - Posted by Anonymous (not verified)

What a surprise, you're wrong again.

100% of 60 = 60
100% faster than 60 = 120
100% slower than 60 = 0


120% of 60 = 72
120% faster than 60 = 132

August 5, 2016 | 09:52 PM - Posted by Anonymous Nvidia User (not verified)

Exactly faster than is the qualifier. Not just computing raw numbers. 120% faster doesn't mean take all and subtract 20% from it. So 60 frames take 100% all frames away = 0. 20% of 60 is 12. So -12 fps is logical. Go back to elementary school and learn math.

If he said it is 220% frame rate it is also double plus 20% purely from a numeric standpoint.

I'm done if you can't get it you probably won't anyway. It was obviously what they meant and not 20%-70% faster only.

So answer me this. Is 60 fps not 120% faster than 23-25 fps. I'm waiting. Nod your head yes instead of twisting things to feel that you are right.

August 5, 2016 | 10:05 PM - Posted by Anonymous Nvidia User (not verified)

The Titan is faster so you go from the lower number and work up from there. No one said that the lower card got 60 fps. I just used the number for convenience. I used 60 fps as baseline and assumed that this would be taken to mean the Titanx was the 60 fps. So 23-25fps is indeed 120% slower than 60 fps or Titanx is 120% faster than it as well.

So maybe you miscomprehended it but I don't think so.

August 3, 2016 | 12:00 PM - Posted by Anonymous (not verified)

Yes, but the RX 480 is not for the same market; Vega is for competing with the Titan X. But I'd love to see 4 RX 480s using Vulkan/DX12-optimized games and both Vulkan's and DX12's multi-GPU adapter ability, once all the kinks are worked out of Vulkan/DX12 multi-GPU adapter support and the gaming market has more Vulkan/DX12-optimized titles. That multi-GPU adapter technology, made available for games to access via Vulkan/DX12, is going to be great for multi-GPU usage of cards like the RX 480, and even the GTX 1060 (no SLI support from Nvidia)!

Yes, the top flagship GPU SKU from AMD is getting a little long in the tooth compared to Nvidia's latest, but at least that older-generation flagship from AMD is getting some async-compute improvements, while Nvidia's long-in-the-tooth former flagship is not as powerful as the new Titan X (Pascal) and has no ability to make use of async compute the way even some older GPUs from AMD can.

August 3, 2016 | 07:44 AM - Posted by Anonymous (not verified)

Doom Vulkan benchmarks?

August 3, 2016 | 07:53 AM - Posted by Anonymous (not verified)

How is the mining of this thing?

August 3, 2016 | 08:03 AM - Posted by John H (not verified)

Please consider testing ARK: Survival Evolved in the future. It is now in the top 5 most-played on Steam, and with its Unreal Engine 4 it crushes video cards; even an overclocked 980 Ti won't maintain 60 fps at 1920x1080 at max settings.

August 3, 2016 | 10:47 AM - Posted by BlackDove (not verified)

Considering that it's probably going to be in early access forever and is totally broken, it's not a good benchmark.

It is a good power virus that doesn't get throttled, though. It also maintains its arbitrary 100% GPU load even when Alt-Tabbed.

August 3, 2016 | 10:53 AM - Posted by John H (not verified)

Fair -- yes, early access isn't ideal, and it's not perfectly optimized, but it is a very popular next-generation title that people are playing for hundreds and even thousands of hours. They've also been working with Nvidia, AMD, and Microsoft a lot to get DX12 up and running (which has not happened yet, unfortunately).

It IS a much more complete game than almost all other survival games out there, despite being early access.

I wouldn't say it's broken, though. If you have a good (fast) server and good PCs to play it on, you do not experience the hitching or other stuff. The 100% GPU load just means it's rendering in the background. If you VSYNC it and your video card is faster than 60 fps, you'll see <100% load.

August 3, 2016 | 01:38 PM - Posted by CNote

I'd like to see it, since it's one of the few games I consistently play besides GTA V and Kerbal. Sure, it can be a bit buggy, but it looks fantastic using the Unreal 4 engine, and like John said above, DX12 is coming soon. It swamps my 970 at ultra, so I drop the settings down a bit, taking some advice from Sapphire Ed. Plus, if they wanted to, they could just host a game and not involve any networking.

August 4, 2016 | 01:43 AM - Posted by BlackDove (not verified)

Pretty sure it's just that the Autodesk Scaleform UI is a power virus, actually.

August 3, 2016 | 08:26 AM - Posted by gamer (not verified)

Is it really that hard or that much work to just show the fps for each GPU? Those line graphs are not a good way to show it.

So why not show fps numbers as well?

August 3, 2016 | 08:28 AM - Posted by Ivan3

There's not gonna be a whole Titan X video episode with Tom? :p

August 3, 2016 | 11:06 AM - Posted by Ryan Shrout

Doesn't appear so...I'm sure you were just fishing for a contest to win a free one? :)

August 3, 2016 | 07:45 PM - Posted by Stefem (not verified)


No seriously, it's always nice to talk with Tom; he is really technically knowledgeable, unlike most of the PR guys. He has been a microprocessor designer and owns some IP too.
I always enjoy the PCPer interviews.

August 3, 2016 | 09:07 AM - Posted by Sasa Stanic (not verified)

Why are there no 5K tests please?

August 3, 2016 | 10:48 AM - Posted by BlackDove (not verified)

Because 5K is totally irrelevant and is never going to be widely adopted. Rec. 2020 covers only 4K and 8K.

August 3, 2016 | 10:31 AM - Posted by Joakim Löwenadler (not verified)

In my opinion the price is a bit steep for a crippled GP102. My guess is that, depending on improved yields and, more importantly, what AMD brings to the table, we're gonna see a Titan X Black or similar (if history is anything to go by) with a fully enabled GP102, and possibly a somewhat higher core frequency than the current one. I dunno if it's just me, but when I "know" there is a fully working GP102 with more performance out there, and they didn't use it for their premium gaming card, it just puts me off. I would never consider getting one.

August 3, 2016 | 10:37 AM - Posted by Anonymous (not verified)

AMD has a clock problem; even if they released a high-end GPU on 14nm they would still be way behind Nvidia, because their architecture cannot hit those high clocks. HBM2 will not solve this problem; AMD's only hope is the new APIs. This is coming from someone with an R9 390X CrossFire setup. AMD needs a Hail Mary card, and they needed it yesterday.

August 3, 2016 | 12:23 PM - Posted by Anonymous (not verified)

High clocks do not equate to high performance; just look at AMD's DX12/Vulkan benchmarks at those lower clocks on the RX 480 compared to the GTX 1060. Also look at HBM's lower memory clocks and its much higher effective bandwidth than any GDDR DRAM clocked at many times HBM's rate. AMD's Polaris hardware is doing more with its lower clocks relative to what Nvidia is doing with its much higher clocks. Nvidia is throwing more clocks at its hardware for some other reason, and that reason is the lack of some in-hardware resources compared to AMD's Polaris micro-arch.
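The effective-bandwidth point is just bus width times per-pin data rate; a quick sketch with numbers from the spec table above (the function name is mine):

```python
def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin rate in Gbps."""
    return bus_width_bits / 8 * gbps_per_pin

# HBM on Fury X: only 500 MHz (1 Gbps/pin effective), but a 4096-bit bus
print(bandwidth_gb_s(4096, 1.0))   # 512.0 GB/s
# GDDR5X on Titan X (Pascal): 10 Gbps/pin on a 384-bit bus
print(bandwidth_gb_s(384, 10.0))   # 480.0 GB/s
```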

August 3, 2016 | 11:12 AM - Posted by Nisco (not verified)

Why don't review sites do any 1920x1080 benchmarks anymore? People that have a 120Hz or 144Hz monitor are interested in those benchmarks!

And you guys should redesign the website for a 1920x1080 format; these benchmark pictures look super tiny and are irritating to read on a 1920x1080 screen. I know you can use the zoom option, but the majority of the benchmark graphs look unsharp then! I wonder how 1440p and 4K monitor people are reading this review.

Steam hardware survey July 2016

37% of people on Steam still use 1080p screens
31% use 4K
26% are still using 1366x768
1.31% for the 1440p people!!!

So please consider 1080p benchmarks for the future.

August 3, 2016 | 12:33 PM - Posted by remc86007

31% use 4K????? I will bet anybody that number is wrong. There are likely at least ten 1080p screens for each 4K one.

August 3, 2016 | 03:04 PM - Posted by Anonymous Nvidia User (not verified)

No, he is correct for Feb 2015 - July 2016. It includes multi-monitor, though.

Over double the video cards used were Nvidia: 57% to 25%.

3 Intel processors to every 1 AMD: 77% to 23%.

48% Win 10 and DX12. The top Vulkan card is the GTX 970 at 10%. Surprise. Guess Nvidia users aren't going to let a few fps and lack of async support bother them.

August 3, 2016 | 03:16 PM - Posted by remc86007

I'm not doubting he read the number correctly. I'm saying the number is not at all a representative sample of the population of steam users, much less all gamers.

The GPU and CPU numbers sound accurate to me.
