The GeForce GTX 1080 8GB Founders Edition Review - GP104 Brings Pascal to Gamers

Author: Ryan Shrout
Manufacturer: NVIDIA

GPU Boost 3.0, Overclocking, HDR and Display Support

Overclocking on Pascal and the GTX 1080 gets a nice upgrade thanks to GPU Boost 3.0. Out of the box, GPU Boost 3.0 behaves nearly identically to previous GPU Boost technologies, increasing the Boost clock until the GPU hits a temperature or power limit. The base clock on the GTX 1080 is 1607 MHz with a rated Boost clock of 1733 MHz, and in my testing we saw actual clocks range from roughly 1630 MHz to 1865 MHz.

But GPU Boost 3.0 does offer a new feature for more granular overclocking. Both users and AIB partners can now set frequency offsets for individual voltage points rather than a single, global offset for every voltage point a GPU might pass through. The result is more efficient use of the theoretical maximum clock speed of the silicon.

To take advantage of that capability, tools like Precision X are being updated to allow users to manually set those offsets per voltage point AND will include a tool that attempts to auto-overclock your graphics card at each voltage point based on parameters you set! Three modes will be available in EVGA's Precision XOC; a rough sketch of how each mode's offsets might be applied follows the list below.

  • Basic
    • This will overclock the GTX 1080 like GPU Boost 2.0 did – applying a set offset to all voltage points.
  • Linear
    • This allows you to set a linear increase in the offset from the lowest to the highest voltage point. You can adjust the grade of the slope, so you can have higher offsets at lower voltages that taper off as the voltage increases, for example.
  • Manual
    • As you might guess, this allows users to manually input the offset for each voltage point / column.
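
To make the three modes concrete, here is a minimal sketch in Python of how a per-voltage-point frequency offset could be computed under each mode. The voltage points, stock clocks and offset values are invented purely for illustration, and the function names are placeholders; this is not EVGA's or NVIDIA's actual implementation, just the shape of the idea.

```python
# Hypothetical voltage points (V) and stock V/F curve (MHz) -- illustrative values only.
voltage_points = [0.80, 0.90, 1.00, 1.06]
stock_mhz = {0.80: 1500, 0.90: 1620, 1.00: 1730, 1.06: 1800}

def basic_offsets(points, offset_mhz):
    """Basic mode (GPU Boost 2.0 style): one global offset for every voltage point."""
    return {v: offset_mhz for v in points}

def linear_offsets(points, low_mhz, high_mhz):
    """Linear mode: the offset ramps from the lowest to the highest voltage point."""
    lo, hi = min(points), max(points)
    return {v: low_mhz + (high_mhz - low_mhz) * (v - lo) / (hi - lo) for v in points}

def manual_offsets(points, per_point_mhz):
    """Manual mode: an explicit offset for each voltage point (missing points get 0)."""
    return {v: per_point_mhz.get(v, 0) for v in points}

def apply_offsets(offsets):
    """Return the resulting V/F curve after adding the offsets to the stock clocks."""
    return {v: stock_mhz[v] + offsets[v] for v in voltage_points}

print(apply_offsets(basic_offsets(voltage_points, 200)))               # flat +200 MHz everywhere
print(apply_offsets(linear_offsets(voltage_points, 250, 50)))          # large offset at low voltage, tapering off
print(apply_offsets(manual_offsets(voltage_points, {0.90: 150, 1.06: 75})))
```

The second call mirrors the "higher offsets at lower voltages that taper off" example described in the Linear bullet above.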

The OC Scanner is the tool I am most excited about. If you have seen or used software-based overclocking tools like the ones ASUS includes in its AI Suite, the process for the GTX 1080 will seem familiar. Precision XOC incrementally tests clock offsets at each voltage point until it detects instability or corruption in the image, at which point it sets the highest previously stable point as the offset for that specific voltage. It repeats the process for each voltage point on the curve, creating a custom overclocking map for your graphics card.

Options for running OC Scanner include how long it should test each VF point, what offsets (MHz) the scanner should start and end at, and how granular the voltage steps should be. The more granular the voltage steps, the more accurate your curve will be and the more stable the overclock, but the scan will definitely take more time.

NVIDIA does warn users that this overclocking/scanning process may pause during a timeout / driver crash. However, the OC Scanner is smart enough to wait for the driver to recover and then resume the overclocking process.
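
The scan logic described above is easy to picture as a loop. Below is a rough Python sketch, assuming placeholder hooks run_stress_test() and driver_has_recovered() stand in for whatever Precision XOC actually does internally; it is not EVGA's code, just an illustration of the per-voltage-point search and the resume-after-crash behavior.

```python
import time

def scan_vf_curve(voltage_points, start_mhz, end_mhz, step_mhz, test_seconds,
                  run_stress_test, driver_has_recovered):
    """Build a per-voltage-point offset map the way the OC Scanner is described:
    walk the offset upward at each point, back off to the last stable value on a
    failure, and wait out any driver reset before moving to the next point."""
    curve = {}
    for vp in voltage_points:
        best_stable = 0
        offset = start_mhz
        while offset <= end_mhz:
            if not run_stress_test(vp, offset, duration=test_seconds):
                # Instability / artifacts detected; the driver may have reset,
                # so pause until it comes back before continuing the scan.
                while not driver_has_recovered():
                    time.sleep(1)
                break
            best_stable = offset          # last known-good offset so far
            offset += step_mhz
        curve[vp] = best_stable
    return curve
```

Shrinking step_mhz (and lengthening test_seconds) maps the curve more precisely and tends to yield a more stable result, at the cost of a much longer scan, which is exactly the trade-off the options above expose.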

Early Overclocking Results

For our overclocking testing, the process was very similar to previous GeForce GTX overclocking sessions using GPU Boost. I used the new version of Precision X, now called Precision XOC, in the basic mode to raise power targets and set a clock speed offset.

First I pulled the power target up to its maximum of 120%, giving us, in theory, a peak power draw of 216 watts. The temperature target shows 92%... but I am assuming that's a typo and it's meant to be 92C. Even with that raised, our card never went over 85C during overclocking.
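
As a quick sanity check on that 216 watt figure, it is just the power target percentage applied to the card's rated board power, assuming the 180 W figure NVIDIA quotes for the GTX 1080:

```python
rated_board_power_w = 180      # GTX 1080 rated board power
power_target = 1.20            # 120% slider maximum on the Founders Edition

print(rated_board_power_w * power_target)   # 216.0 W theoretical peak power limit
```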

It quickly became obvious to me that the GP104 GPU will scale if given the ability. Without so much as a hiccup, our Founders Edition card was able to cross the 2.0 GHz mark with a +200 MHz offset! 

In our stock testing, GPU clocks in Unigine Heaven hovered around 1700 MHz. Combining the 200 MHz offset with the power target increase resulted in a usable clock rate increase of more than 300 MHz! Seeing a GPU run at higher than 2000 MHz on air, without much effort, is still a stunning sight.

Of course I wanted to try out the other overclocking modes listed above, linear and the auto scanning. 

As described above, by clicking on the left and right sides of the graph you can adjust the slope of the offset per voltage point. In this example, I created a nearly flat offset that raises clock speeds more at the beginning of the voltage curve but minimizes the increase as you use more power. I'm not telling you this is the ideal curve, but it is interesting that GPU Boost 3.0 gives us this kind of capability.

And if you want the ultimate in granularity, manual mode will give you that, with an easy click point for each voltage setting to enable different offsets. I randomly picked some as you can see here, including lowering the clock at a couple of voltage points just off center (lower blue bars) to see if that was possible. This mode is really meant for the most hardcore of GPU enthusiasts, as it will require a lot of guesswork and testing time.

A solution to that is supposed to be EVGA's Precision XOC OC Scanner mode. 

The goal here is to automatically run through the manual option, having the software increase clock speeds and run stability tests in the background. Once it notices instability or artifacts, it picks the last known good setting and moves on to the next voltage point. Unfortunately, in the time I had with it, the process was incredibly unstable, crashing the driver and not auto-resuming as expected. I attempted the process a half dozen times or so, including between reboots, but never had more than one or two voltage points complete before the system refused to keep going.

Hopefully NVIDIA and EVGA work on this some more to bring it to a point where we can actually utilize it - it might be the coolest addition to GPU overclocking in years!

HDR and Display Technologies

The next big shift in display technology is known as HDR, high dynamic range, and if you haven't seen it in person, the advantages in image quality it offers are hard to describe. The BT.2020 color space covers 75% of the visible color spectrum, while the current sRGB standard most monitors are judged by covers only 33%. Contrast ratios and brightness will also see a big increase with HDR displays, with many LCD options going over 1000 nits!

Maxwell GPUs already support quite a few HDR display capabilities, including 12-bit color, the BT.2020 wide color gamut and the Dolby SMPTE 2084 algorithm to map HDR video to the BT.2020 color space. HDMI 2.0b support is also in the Maxwell / GTX 980 GPU, which includes support for 10/12-bit output to 4K displays.
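
For reference, the SMPTE 2084 "PQ" curve mentioned above is a published transfer function, not anything NVIDIA-specific. A minimal sketch of the encode direction (absolute luminance in nits to a normalized signal that can then be quantized to 10 or 12 bits) looks like this, using the constants from the standard:

```python
# SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance (cd/m^2, up to 10,000 nits)
# mapped to a 0..1 code value suitable for 10/12-bit quantization.
M1 = 2610 / 16384            # ~0.1593
M2 = 2523 / 4096 * 128       # 78.84375
C1 = 3424 / 4096             # 0.8359375
C2 = 2413 / 4096 * 32        # 18.8515625
C3 = 2392 / 4096 * 32        # 18.6875

def pq_encode(nits):
    y = max(nits, 0.0) / 10000.0
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

# Shadow detail, SDR reference white, a bright HDR highlight, and the 10,000 nit ceiling.
for level in (0.1, 100, 1000, 10000):
    print(f"{level:>7} nits -> {pq_encode(level):.4f}")
```

The point of the curve is that it allocates code values according to how sensitive the eye is at each luminance level, which is what lets a 10/12-bit signal span roughly 0.0001 to 10,000 nits without obvious banding.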

Pascal adds to that list with 4K 60 Hz 10-bit and 12-bit HEVC decode acceleration, 4K 60 Hz 10-bit HEVC encoding and DisplayPort 1.4-ready HDR metadata transport.

NVIDIA is enabling HDR through GameStream as well, meaning you can stream your PC games in HDR through your local network to an NVIDIA SHIELD device connected to your HDR display in another room.

Getting HDR in games isn’t going to be a terribly difficult task, but it isn’t a simple switch either. NVIDIA is working with developers to help them bring HDR options to current and upcoming games including The Witness, Lawbreakers, Rise of the Tomb Raider, Paragon, The Talos Principle and Shadow Warrior 2.

Just like the GeForce GTX 980, the new GTX 1080 can support up to four simultaneous display outputs from a maximum of six output connections per card. Maximum resolution support jumps from 4K to 7K through a pair of DisplayPort 1.3 connections. GP104 is DisplayPort 1.2 certified and DP 1.3/1.4 ready for when the standard finally gets ratified. It's a bit risky to claim support for DP 1.3/1.4 without knowing 100% how the standard will turn out, but NVIDIA is confident it has the hardware in place for any kind of last minute changes.

[Table: GTX 1080 supported display and video features]

And for those of you keeping track at home of the supported video features of the GTX 1080, a table is supplied above.


May 17, 2016 | 09:10 AM - Posted by Anonymous (not verified)

I'm seriously disappointed the new generation isn't going to usher in a new era of 60fps 4K gaming with max details. It's half-again faster than the 980Ti, so it's a huge step up, but it didn't quite reach my expectations.

May 17, 2016 | 09:28 AM - Posted by Anonymous (not verified)

The reality is that developers will spend a certain amount of time optimising a game. And if the game runs reasonably well on most hardware and configurations (1080p/1440p) then they're going to stop. As better GPUs come out, so too do newer and more demanding games.

I mean, if you take the 1080 and run Battlefield 4 at 4k you can hit 60fps easily now.

May 17, 2016 | 12:43 PM - Posted by Anonymous (not verified)

Yeah... I was expecting... more...

Too much apparently :/

Let's see what $1,000 will buy us in a few months then.

May 17, 2016 | 12:45 PM - Posted by Jet Fusion

Meh. Some people always seem to find a reason to complain i guess.
Most new models have had around a 15%-20% performance increase compared to the previous models for many years.
This chip and card seem to offer about 3x that standard increase, at 45-60% faster. This is simply by looking at base clock speeds.
This is not even benchmarked. The new GPU and memory are slapped onto each other and both are new tech. The end result might even be better than an average +50%.
This is without overclocking and product optimizing.

May 18, 2016 | 04:41 AM - Posted by Anonymous (not verified)

LOL, oh please, where do you get your numbers from?

May 17, 2016 | 07:59 PM - Posted by Overbuilt Gaming (not verified)

I'm not impressed with it either. The NVidia presentation had the OC at 2100 running 67C; what they didn't show was the fan at 100%, probably in a refrigerator. Thermal throttling kicks in at 82C.

For 980 SLI vs. 1080, only in a few cases was the 1080 faster, and maybe by 1-2% at that.

For those already owning a 980 or 980ti, it is an upgrade yes, but I'm not sure it's $700+.

May 28, 2016 | 01:39 AM - Posted by Jason Elmore (not verified)

Thermal Throttling kicks in at 92C not 82C

May 30, 2016 | 02:01 AM - Posted by Stevebit (not verified)

YOU ARE TOTALLY WRONG, THERMAL THROTTLING KICKS IN AT 83C not 92C

June 6, 2016 | 11:57 PM - Posted by Anonymous (not verified)

Yes and No.
You can set the GPU limit on different cards. Not sure about the Founders but I saw an MSI card at 92degC.

The Founders preset was about 82degC.

May 17, 2016 | 09:11 AM - Posted by Msquare (not verified)

RIP 980TI and Maxwell ! You served well.

May 17, 2016 | 11:49 AM - Posted by Trey Long (not verified)

The partners' custom cooled and factory overclocked cards are the way to go. I'm excited to see what insane speeds they can achieve. With a little driver maturity and custom cards, is 25% faster than the current reference possible? Could be.

May 17, 2016 | 09:24 AM - Posted by Anonymous (not verified)

I'll hold on until in-the-wild testing can confirm if theoretical VR performance increases can translate into ACTUAL VR performance increases (as VR SLI and Multi-Res Shading did not), but I might be moving up from the 980Ti sooner than I was intending.

May 17, 2016 | 09:26 AM - Posted by steen (not verified)

Nice to see updated Nvidia BenchWorx in action. ;)

May 17, 2016 | 09:31 AM - Posted by JohnL (not verified)

So.......nvidia....where is the promised async compute driver for GTX 900 series after this?

May 17, 2016 | 09:39 AM - Posted by Anonymous (not verified)

Exactly. Where's the DX12 support for Fermi GPUs? Nowhere, and later cancelled :P

May 17, 2016 | 10:15 AM - Posted by Martin Trautvetter

Oh, it's all available in this shiny new box, and for only $699! Also, you get a really nice hardware dongle with it, and can use your old card for PhysX or as a paperweight! :P

May 17, 2016 | 12:40 PM - Posted by Anonymous (not verified)

ROTFL...

May 28, 2016 | 03:55 AM - Posted by Nizzen (not verified)

Where are the games that you need DX12 for? LOL
:p

June 6, 2016 | 11:59 PM - Posted by Anonymous (not verified)

Silly rabbit, I only play AotS and refuse to play any other game.

Must be a "red shirt" Star Trek slash AMD joke in there somewhere.

May 21, 2016 | 02:02 PM - Posted by Anonymous (not verified)

Idiot, they can't and won't come. They can't do Async, it's a flawed card design, and Crapvidia announced you all got screwed in the butt hole by them. Hope you enjoy it

June 7, 2016 | 12:02 AM - Posted by Anonymous (not verified)

They can do asynchronous computations, but they can't do Async Compute.

NVidia's technology needs to be optimized for. NOBODY has any useful tests yet so it's pointless to scream about the situation.

I mean, did you even read the article?

May 17, 2016 | 09:33 AM - Posted by Anonymous (not verified)

Not bad, but not extraordinary either.
It would have been good to show some overclocked results in gaming. It reaches 2GHz, but how much of that speed translates into real performance? Or did I miss something?

May 17, 2016 | 09:34 AM - Posted by Anonymous (not verified)

Any coil whine, Ryan?

May 17, 2016 | 09:56 AM - Posted by JohnL (not verified)

Add one more --> Any 'unannounced' memory segmentation (ROP amount, L2 Cache size too) features?

May 17, 2016 | 10:04 AM - Posted by Lemi (not verified)

No, no, no! That's why this is "Founders Edition". You'll all have to find this for yourself. That's why they reduced the price...oh, wait...

May 17, 2016 | 11:24 AM - Posted by Allyn Malventano

None to speak of aside from the typical 500FPS loading screen sound.

May 17, 2016 | 09:38 AM - Posted by Zurv (not verified)

SLI doesn't work in DX12.. sooo.. maybe take out the Hitman and Tomb Raider 980 SLI results.
In the DX12 tests the 980 SLI is pretty much just the speed of a single 980.

May 17, 2016 | 11:17 PM - Posted by BillDStrong

It does prove the point of the limitations of DX12 and UWP gaming, so I would say leave it in. After all, these are representative of what folks are playing.

May 21, 2016 | 02:11 PM - Posted by Anonymous (not verified)

Another idiot.. it has nothing to do with DX12... DX12 made SLI and Crossfire stone-aged; you can use both cards in series utilizing all of the memory in both cards, but then again the DX12 limitations are only subject to Crapvidia. AMD runs fine and at a performance boost way above Crapvidia, where Crapvidia gives you an OMG WOW 10% improvement... WOW, I want to run and get a new paperweight

May 17, 2016 | 09:52 AM - Posted by Cherniq (not verified)

So, faster than 980 SLI turned out to be a total lie.
160 W TDP on the slide is 180...
9 TFlops are actually 8.2 TFlops

And then when the other company does a paper launch, everyone goes nuts and hates them; here we praise lies and excuse it as a misunderstanding between departments.
At least Maxwell (excluding the 970 fiasco) was an engineering feat in terms of clock speed and power consumption.
Nothing to see here (only the software stuff and how it will perform).

I was hoping to get a new card, but I guess I have to wait for the next one (doesn't matter the camp, I want better perf/$ than this).

May 17, 2016 | 09:57 AM - Posted by Well the usual (not verified)

Well...it's always like that. You use the metrics that most suit your point. We should return to the CEO Math (Pascal will be 10x Maxwell) and see how that hype turned out today :P
But I agree with you.

May 17, 2016 | 11:40 AM - Posted by Anonymous (not verified)

It's as fast as 980 SLI when OC'd to 2GHz

May 17, 2016 | 04:50 PM - Posted by Lemi (not verified)

Sure, OC the GTX 1080 but you don't OC the 980. With an overclocked 980 and 980Ti... this won't look half as impressive, but it's interesting how reviews are made these days...

May 17, 2016 | 10:13 AM - Posted by Martin Trautvetter

Did you note the boost clocks all the cards reached during game testing? I'm wondering if the 1080 would be much of a jump over a 15/20% factory-o/ced 980 Ti.

May 17, 2016 | 05:13 PM - Posted by Anonymous (not verified)

The 1080 is 25% faster without even overclocking.
Overclocked it's over 40% faster.
Overclock the 980ti and the 1080 is still at least 25% faster...

May 17, 2016 | 10:17 AM - Posted by vicky (not verified)

i wish to find more blender cycles benchmark.....

T^T

May 17, 2016 | 10:20 AM - Posted by Prodeous13 (not verified)

Great minds think alike :)

Definitely want to see some compute tests, preferably with a Blender Cycles CUDA test, like the BMW test ...

May 17, 2016 | 11:59 AM - Posted by Anonymous (not verified)

OpenCL too, no vendor lock-in please!

June 17, 2016 | 09:11 PM - Posted by Admiral Potato (not verified)

I would have to say that these days, the BMW test is one of the least useful performance test blend files you can possibly run. It uses only Diffuse, Glossy, Glass and Emission shaders - which run fast on every card I've ever owned. Try putting it through its paces on the hair simulation scenes in the Blender Foundation's 2016 benchmarks, and that's where you'll see the card start to show its true colors. The Koro(lamaa) and Fishy Cat scenes will be the proof in the pudding for whether it's another lemon at release that will take 10 MONTHS OF DRIVERS AND CYCLES UPDATES (I'M LOOKING AT YOU, 980 TI) to start performing well, or whether it handles things like hair and particles well. I'm also hoping that the next generation of cards starts to perform well with volumetrics - My 780 TI cards still offer the best performance per card on all complex shaders.

Blender Foundation's February 2016 Cycles Performance Test:

https://code.blender.org/2016/02/new-cycles-benchmark/

May 17, 2016 | 11:55 AM - Posted by Anonymous (not verified)

Blender Cycles benchmarks for Windows and Linux, and whatever Vulkan API stuff the Blender Foundation may be using in the future!

And more, much more: Linux (SteamOS, others)/Vulkan gaming, and Blender graphics benchmarks.

Vulkan will it Blender?

May 17, 2016 | 10:18 AM - Posted by Prodeous13 (not verified)

Wonder when you plan to test computing capability. I so want to see Blender in Cuda, just to see how it compares to other Nvidia cards and if we truly will see its improved FLOPS.

May 17, 2016 | 10:35 AM - Posted by StephanS

This is depressing, nvidia pretty much did a finishing move, Mortal Kombat style, on AMD today :(

AMD's GPU market share is going to shrink even more now that nvidia is firing on all cylinders...

AMD will have to drop prices by 20 to 40% or sales will plummet to nothing, because it seems the GTX 1070 will easily beat the Fury X and will be priced $200 cheaper!
So AMD will need to drop the price of the Fury X by $300 (half the RAM, more power consumption, and slower).

Makes you wonder if the Fury line can even be profitable at this stage and AMD might have to cancel production. Oh my.. this is bad.

This means AMD can't sell any Fury or R9 without losing money,
and Polaris will be relegated to the new mid/low-end class,
where margins are thin.

May 17, 2016 | 10:56 AM - Posted by Oskars (not verified)

Get an Nvidia card for the advertised pricing and then say AMD has to lower this and that.
We have no info on the real availability of the new 1080 to say it will affect the pricing of any card, be it AMD or Nvidia. It could very well be priced from 750 to 800 American dollars for a few months, and that price won't compete with any card on the market.

May 18, 2016 | 02:24 PM - Posted by StephanS

A consumer will wait if he/she knows the 1080 has an MSRP of $599 and only sees cards that are half as fast and cost $650.

Pay $150 more for a slower card with half the RAM?

But yes, it's possible that people value the 1080 at much more than $600.
Nvidia will sell out of the $800 model because compared to a $650 Fiji it's a super deal (FPS per $).

I'm sorry, but I don't see AMD getting out of the hole with its upcoming lineup. And nvidia will drive AMD prices so low that its margins are going to be terrible...

May 17, 2016 | 10:58 AM - Posted by Martin Trautvetter

I think AMD didn't need a finishing blow; they did plenty of damage to themselves with the whole Fury debacle (in hindsight) and by simultaneously raising prices through their 8GB Hawaii rebrands.

I guess they're now shooting to be the king of 60W GPUs, so a hearty *shrug* to them.

May 18, 2016 | 02:20 PM - Posted by StephanS

AMD targets Polaris parts at the low end, yes.
Large 14nm chips with low clocks will compete, but the margins are going to be horrible for AMD.

And I agree AMD's pricing is ridiculous... At retail AMD wants you to pay more for a slower and hotter card. This is why nvidia is turning billions in profit and AMD's GPU division is deep in the red.

The R9 390X was a HUGE mistake... AMD only had to tweak the 290X, by NOT overclocking the beast and dropping the price to $260.
AMD also doesn't believe that software optimization is profitable.

AMD mantra: "Slow, un-optimized drivers? .. we can fix that by making bigger GPUs"

May 17, 2016 | 11:54 AM - Posted by tatakai

$700 card isn't going to affect AMD much. And the volume is likely not going to be that big even if it would affect AMD. Not a finishing move. If nvidia won the sub $350 market, then it would be a finishing move. But it looks like the 1060 is a ways off and the 1070 might not have a clear win (and costs too much).

There is also the case of OEMs. AMD is poised to offer OEMs new affordable parts to market to their customers. Nvidia going after the high end means they aren't

May 18, 2016 | 02:01 PM - Posted by StephanS

$600 for the OEM 8GB models. The 4GB Fiji is $650, and is much slower.
It is at most at the 1070's performance level.

The issue is that AMD can't keep Fiji alive in its lineup by lowering its price, because the price would need to drop from $650 to about $280-$350, making the card non-profitable.

In contrast, it's possible to keep other 28nm cards on the market.

nvidia will continue to manufacture and sell 28nm in its lineup.
Because Maxwell is very cheap to manufacture, nvidia can drop the GTX 980/970 by $100 and still make very good profits.

Another issue: TSMC's 16nm seems to clock much, MUCH higher than AMD's 14nm process. This means nvidia can sell a 1.6GHz part while AMD is stuck at 1.3. A 25% gain in clock plus a better architecture gives nvidia a 30 to 40% gain in per-transistor performance.
AMD will need chips 30 to 40% bigger to compete, which in turn reduces their margins.

And the problems don't stop there. nvidia already gave final cards to reviewers.. with AMD we don't even know the paper specs of their Polaris 10.. let alone the card that will replace the Fiji line.

So I don't see AMD gaining any market share this year, and AMD's GPU division has been losing money, so if things get worse... Things are not looking good at all :(

May 17, 2016 | 04:17 PM - Posted by thatactuallyguy (not verified)

What are you talking about? This is next generation, which AMD will be releasing in less than a month. Your comparison would be like proclaiming the Xbox division dead when the PS4 was released because it was more powerful than the Xbox 360, when the XB1 was only a week away. Pascal is competing with Polaris, NOT Fiji, and if rumors are true then the first Polaris cards will release less than a week after the 1080 hits.

May 18, 2016 | 02:13 PM - Posted by StephanS

You believe that? We don't even have the paper specs for the mid/low-end Polaris 10.

AMD has announced no date for a card to compete with the 1070 or 1080.

Vega is only 2017 at this time, and has no specs.

With AMD it's all rumor and speculation. nvidia actually gave production boards to reviewers already.

If AMD has a 6 month gap before it can compete with the 1080, AMD will have lost the high-end market. And margins are going to SUCK for AMD in the low end because nvidia can do so much more per transistor and 16nm TSMC seems to clock much higher...

I personally think AMD is entering its biggest challenge.

June 9, 2016 | 10:59 PM - Posted by 7Fred7 (not verified)

Sure looks that way. For obvious reasons, it's very bad news for everyone (except Nvidia) if AMD can't retain enough market share to move forward with r & d as a serious competitor. Either AMD starts pulling rabbits from hats, or we all lose out. People should bear that in mind when they switch to fan-boy mode.

May 17, 2016 | 05:18 PM - Posted by ppi (not verified)

Polaris will not be relegated to the mid/low-end class, because it was designed for it. Both Polaris 10 and 11 are smaller chips.

Agreed that AMD can now stop bothering to produce the Fury line. Same for the 980Ti, and after the intro of the 1070 also for the 980 and 390X. Polaris is probably going to do the same for the 970/390 and everything below, stopping above Intel's integrated GPUs.

May 18, 2016 | 02:09 PM - Posted by StephanS

They have to stop production of Fiji, or it will be a financial disaster.

But the issue is, we don't even know the specs of Polaris 10.
Yet the 1080 cards were already reviewed and we have a hard release date.

Who will spend $650 on a slow 4GB Fiji when they can get a $600 GTX 1080?

And Polaris 10 is expected to be half the speed of the 1080.

So AMD will have no card to compete with the 1070 and 1080 until 2017.

May 18, 2016 | 09:34 AM - Posted by Lampecap (not verified)

Ah, it's not as bad as it looks. Maybe at the moment, but you need to look further. This guy really seems to know how it all hangs together. Kind of like a masterplan for AMD.

https://www.youtube.com/watch?v=0ktLeS4Fwlw

AMD will make a comeback in the coming years, maybe not at the high end at first, but their architecture controls almost all of the total gaming market through consoles. Just watch the videos ^^

May 18, 2016 | 02:06 PM - Posted by StephanS

But that advantage is irrelevant now.
Pascal is doing amazingly well in DX12. Console optimization will now benefit nvidia more than AMD.

AMD's 14nm process also seems worse than TSMC's 16nm.
And TSMC already has 10nm very far along, so nvidia might be able to jump nodes much earlier than AMD for the next gen.

nvidia is untouchable at the mid/high end (that includes HPC)
and can make smaller chips than AMD for equal performance at the low end.
AMD will have to really lower their margins to compete in the low end. That might not be enough to make AMD's GPU group profitable again :(

May 18, 2016 | 07:51 PM - Posted by Anonymous (not verified)

I wish you could collapse troll threads here.

May 21, 2016 | 02:17 PM - Posted by Anonymous (not verified)

lol are you in for a surprise. i just can't believe how many crapvidia fan boys are blindsided .... you all deserve what you get, more money than brains; for a whole % fps less you'll go spend more money to support a lying, cheating piece of crap company that rapes you at every turn ... wow what trolls .... and idiots to boot ... well, very entertaining forum, but i want real facts, not under-the-bridge trolls that don't know their ass from a hole in the ground ....

May 17, 2016 | 10:40 AM - Posted by mikesheadroom

Good work guys! I enjoyed the video review/summary.

May 17, 2016 | 10:42 AM - Posted by BrightCandle (not verified)

There is a part of the Async compute diagram that isn't making a lot of sense to me.

The entire point of async compute was to allow the simultaneous running of graphics and async shaders to get better utilisation of resources so the shaders were not sitting idle, but the shaders aren't capable of doing the "graphics" work since it's fixed-pipeline work. In effect, what the dynamic diagram ought to be showing is an additional compute workload happening after the first one, not the compute taking over graphics work, which doesn't make any sense.

I don't think the issue with Maxwell's implementation was ever made really clear, and these slides aren't helping clear up the situation. Do you have more information about how this will work? What am I missing here?

May 17, 2016 | 10:50 AM - Posted by Anonymous (not verified)

No... That's not the point. That's what AMD tries to sell you, but that's not what async means.
It's been explained better in other places that async does not require parallelism.

May 17, 2016 | 10:43 AM - Posted by Anonymous (not verified)

"Similarly for compute tasks, Pascal integrates thread level preemption. If you happen to be running CUDA code, Pascal can support preemption down the instruction level!"

So what they may be saying is that it's improved, but that it's not fully hardware based, and that single-instruction preemption needs CUDA code to be of any help for debugging at the single-instruction level (AKA single stepping through code in debugging mode)! Most certainly Nvidia has improved some thread-level graphics/compute scheduling partially in hardware, and that will result in better utilization of GPU hardware execution resources than in previous Nvidia generations.

I do not like the sound of that "happen to be running CUDA code", as that smacks of a vendor-specific proprietary solution that forces others into the CUDA ecosystem in order to obtain the ability to look at things at the instruction level. How is this going to play out for Vulkan/other API debugging, as well as OpenCL, or other cross-platform open code/graphics APIs/other code that may not be using CUDA?

There is going to have to be a serious comparison and contrast of the in-hardware async-compute features of both Polaris/Vega and Pascal/Volta, and it cannot wait for the Hot Chips Symposium white papers and other professional trade events.

Any GPU processor thread scheduling/dispatch done in software is just not going to be as responsive to sudden asynchronous events at the hardware/instruction level as scheduling done fully in hardware by specialized thread/instruction scheduler/dispatch and context-switching units. No amount of trying to hide latencies for asynchronous events in software can respond as efficiently and as rapidly to an asynchronous GPU processing thread event as something fully implemented in the GPU's (or any processor's) hardware! Without fully in-hardware asynchronous compute thread scheduling/dispatch and context switching there will be idle execution resources, even with work backed up in the processor's thread scheduler queues! Most software-based scheduling, for lack of fully in-hardware units, has an intrinsic deficiency in its ability to respond at the sub-instruction level to any changing event in a GPU processing unit's execution pipelines (FP, INT, and others) the way fully in-hardware async-compute units can.

Read up on Intel's version of SMT (HyperThreading) to see how async compute is done fully in hardware; async compute done fully in a GPU's thread dispatch/scheduling/context-switching units has a large advantage over any software, or partially software, dispatch/scheduling/context switching for asynchronous compute. Fully hardware-based asynchronous compute has the fastest response to any asynchronous events, and the best processor execution resource utilization possible!

May 17, 2016 | 11:04 AM - Posted by Anonymous (not verified)

P.S. True hardware-based asynchronous compute is fully transparent to any software (except the ring 0 level of the OS kernel, mostly for paging/page fault events and other preemptive multitasking OS context switching/hardware interrupt handling events) and is fully implemented in the processor's hardware for CPU/GPU thread scheduling/dispatch/context switching!

For discrete GPUs the OS is in the card's firmware (mostly) and GPU drivers, and runs under control of the system's main OS/driver/OS driver API (WDDM for Windows, kernel drivers for Linux) software stack.

May 17, 2016 | 10:45 AM - Posted by Oskars (not verified)

Lousy performance per $, given that the GPU die is ~314mm2 vs the earlier 398mm2 - production costs must be comparable, and overall GPU manufacturing costs even lower compared to the 980, given that memory prices have been dropping for a while now.
And those minimum framerates are also very far from the average; a really unbalanced card, or maybe just the fault of the drivers? But even so - disappointing.

May 17, 2016 | 12:15 PM - Posted by Kjella (not verified)

Lousy performance per $, given that the GPU die is ~314mm2 vs the earlier 398mm2 - production costs must be comparable

Which is neither price nor performance, so how can it affect price/performance?

May 17, 2016 | 12:25 PM - Posted by Oskars (not verified)

Given that Pascal clock-for-clock is the same as Maxwell, and the price will be well over the advertised 700 USD - lousy performance per dollar.
Better?

May 17, 2016 | 04:11 PM - Posted by Allyn Malventano

Good luck running Maxwell at >2GHz.

May 17, 2016 | 04:17 PM - Posted by Allyn Malventano

This is how every industry die shrink happens. Up-front development cost must be recouped somehow. Same happens with Intel.

May 17, 2016 | 10:55 AM - Posted by Spacebob

Are there real price drops for the 970, 980, 980Ti coming? I don't see how those cards can stay at their same prices. If the prices do fall what happens to the performance per dollar of the 1080?

May 17, 2016 | 11:03 AM - Posted by Martin Trautvetter

I don't think you'll see formal price cuts on the earlier models. Rather, they'll simply cease production and sell through whatever inventory is still remaining in the channel via clearance pricing.

May 17, 2016 | 11:09 AM - Posted by JoshD (not verified)

I can't believe how Nvidia distorts the truth. They're lucky their products are pretty good (just not quite as good as they pretend they are), or there would be a big consumer backlash.

May 17, 2016 | 04:12 PM - Posted by Allyn Malventano

What distortion have you seen here that is not in line with the typical PR preso spin applied from either camp / any product launch?

May 17, 2016 | 04:48 PM - Posted by JoshD (not verified)

You're probably right that the marketing spin on things is expected at this point. But I still think it's slimy. Like holding up a $699 (Founders) GPU and claiming the price is $599. Until we see otherwise, it's a $699 USD GPU to me (the Founders premium is even worse in the UK).

May 18, 2016 | 07:57 PM - Posted by Anonymous (not verified)

I suspect the founders premium is due to constrained initial supplies, so good luck getting one, even if you are willing to pay the price premium.

May 17, 2016 | 11:14 AM - Posted by Holyneos (not verified)

How much faster is the 1080 vs a non-reference card like a G1 Gaming, Asus ROG Matrix GTX 980Ti, etc.? You only benchmarked it against the reference 980Ti.

May 17, 2016 | 11:43 AM - Posted by Allyn Malventano

It's looking like the overclock headroom is similar between the different models, so a non-reference 1080 would scale against the other non-reference cards proportionally, meaning our reference to reference comparisons will be similar to hypothetical non-reference to non-reference comparisons.

May 17, 2016 | 11:29 AM - Posted by Anonymous (not verified)

"capitalism states"

More like the laws of Supply and Demand, and supply and demand works across many economies where there are no price controls in effect, Mercantile like trade restrictions, or centralized communist/other unreasonable market controls. its much more like the free market states(with resonable restrictions to support fair and open markets) and the laws of supply and demand. Crony Capitalism can and does run counter to the intrests of the free market.

See: The Standard Oil Trust, Intel vs AMD, M$/M$'s OS market share vs the third party PC/Laptop OEM market, Intel too, with the third party PC/Laptop OEMs. As prime exaplles of Crony Capitalism at its worst!

May 17, 2016 | 11:30 AM - Posted by Adam (not verified)

Ryan - How about doing DX12 benchmarks - AMD PRO DUO vs GTX 1080. That is AMD's Competition vs Gtx 1080 for next 7 months. Do this for us - your fans. DX12 only Please :)

Your Fan

May 17, 2016 | 11:37 AM - Posted by Allyn Malventano

I hate to disappoint, but since there is currently no multi-GPU support in DX12, your PRO DUO vs 1080 comparison would end up looking like a Nano vs 1080, which would look extremely bad for AMD.

May 18, 2016 | 04:16 AM - Posted by albert (not verified)

You said there was no support in DX12 yet you assume AMD will do worse ???? Love the bias against AMD Allen/Allyn, keep it up. And it comes as no surprise that bias continues on in the reviews at PCper as per usual !

May 18, 2016 | 05:09 AM - Posted by Pyrolit (not verified)

Don't be dumb, he said DX12 doesn't currently support multi-GPU.

Since the PRO DUO == Dual Nano in Crossfire you will only get the performance of one of them.

The 1080 is a single GPU card so its performance doesn't take the same hit.

May 19, 2016 | 06:54 AM - Posted by Paul EFT (not verified)

Oh be quiet.

May 17, 2016 | 12:04 PM - Posted by tatakai

The Pro Duo is not its competition. I mean, for those with the cash a Pro Duo might seem attractive, but it's more a professional card for certain things.

It wouldn't even be a fair comparison: in a game that supports CrossFire well, the Pro Duo would win.

I don't think there is competition from AMD for the 1080 right now. Vega will obliterate it, but the only value in that right now is to make people think all your GPUs are better. Halo product effect.

May 17, 2016 | 07:02 PM - Posted by Allyn Malventano

Yes for multi-GPU enabled titles, Pro Duo would win, but for its cost you could SLI a pair of 1080's, which would in turn beat the Pro Duo.

May 17, 2016 | 05:48 PM - Posted by Anonymous (not verified)

No, the PRO DUO has been repurposed by AMD as a development SKU, with a very nice option of getting to use the Pro graphics drivers with the PRO DUO without having to pay twice as much, or even more, for a full professional FirePro SKU. And RTG will be using some Vega SKUs to compete with the 1080/others! Polaris is RTG's mainstream branding, to maybe get AMD's Polaris GPU SKUs into some laptops, and there are probably only 2 Polaris M400 series offerings that are not re-brands, so maybe only 1 mainstream Polaris mobile SKU and one high-end laptop Polaris variant, plus some other mainstream Polaris desktop variants.

You will have to wait at least a year before there are enough DX12/Vulkan titles, and enough DX12/Vulkan tweaking done on games that are already DX12/Vulkan enabled, for all the necessary tuning of gaming engines/game titles for both DX12 and Vulkan. Top that off with the need for all the benchmarking software to catch up to the new DX12/Vulkan APIs, which are themselves still undergoing tweaking and updates for the latest GPU hardware!

All you need to do is suspend your belief in any website declaring a winner for the next year! So don't believe any hype for a year, and know that any marketing is just hype from a "profession" that traces its roots back to the snake oil salesman, and much further back to when the first lie was told!

Watch out for those lies of omission; there are plenty out there who will provide only some very cherry-picked metrics to spin things positive by omitting important facts that other results will show, so read as many reviews as you can but wait a year to see the truth!

May 18, 2016 | 02:20 AM - Posted by khanmein

allyn, what's the reason behind u guys use driver 348.13??

May 18, 2016 | 01:49 PM - Posted by Allyn Malventano

It was the newest we were provided at the time of testing. OC was done with a slightly newer build.

May 17, 2016 | 11:46 AM - Posted by khanmein

ryan, got any benchmark for encode/decode h.265 like handbrake etc? thanks..

May 17, 2016 | 07:02 PM - Posted by Allyn Malventano

Handbrake probably has to be updated to support this first, no?

May 18, 2016 | 01:43 AM - Posted by khanmein

yeah true also.

May 17, 2016 | 11:52 AM - Posted by Anonymous (not verified)

That is one powerful card, I just wish it cost $399. The only problem with this card is that it still cannot run all games at 4K@60 fps, so if that is what you are looking for you will have to wait for the big chip. That being said, if you intend to run a sub-3440x1440 monitor for the next few years, this card will hold you over for some time.

May 17, 2016 | 12:04 PM - Posted by Shambles (not verified)

I won't lie, I'm a bit let down. A ~25% performance increase at the same price point isn't bad, but I was expecting more for a double node shrink. The efficiency increase is great but not really what matters to me for desktop gaming.

May 17, 2016 | 12:16 PM - Posted by BrightCandle (not verified)

The issue is really the price. It's a small die, and so a 980-class GPU (the 980 was 394mm^2 and this is just 314mm^2), and hence its performance improvement is pretty good given the die size. But the issue is it's priced like it's a 600mm^2 die like the 980 Ti, and vastly higher than the 980 was at release.

Its price is what is disappointing really; its performance is about right for a node shrink. But then I think a lot of us were kind of expecting 20nm + FinFET (16nm as they call it) to be 2 node jumps in one and hence potentially give us 4x the performance, but it's clearly not that good.

May 17, 2016 | 12:38 PM - Posted by Josh Walrath

Eh, 16FFP is more than just 20nm w/FinFET.  Sure, the back end metal layers are using the 20 nm process, but litho and etch are much, much different.

May 18, 2016 | 08:06 PM - Posted by Anonymous (not verified)

Over 300 mm2 is a big die for 16 or 14 nm. Don't expect die sizes in the 600 mm2 range in the consumer market without truly ridiculous prices. Yields for the same die size will not be comparable to 28 nm parts. Nvidia's 600 mm2 part is in the $15,000 range, AFAIK.

May 17, 2016 | 12:28 PM - Posted by Zapin (not verified)

The performance definitely does not match the hype (especially the claims saying it beats 980 SLI) but still an amazing card because of the competitive price point.

Still the fact that they are trying to bring proprietary API features into VR will keep me from supporting them as a company. Hopefully, the competition will be able to step up their game also.

May 17, 2016 | 05:11 PM - Posted by snook

still not the second coming of the christ, but, yeow, that 2+ Ghz is godly.

why no ashes of the singularity?

edit: AoTS isn't included because the 1080 is not putting in work there, yikes.

May 17, 2016 | 12:34 PM - Posted by Anonymous (not verified)

You are saying Fast Sync should only be used when frames are high. So what is considered a baseline for when you should use it?

May 17, 2016 | 07:05 PM - Posted by Allyn Malventano

If you are *just* above the panel's max, the issue is that slight judder is introduced, which subsides as you get to 2x-3x the max. That said, the judder will probably be imperceptible on panels with very high max frame rates (144). The benefit is that it comes closer to the tear-free behavior of v-sync on, with less latency added when above the max panel rate.

May 17, 2016 | 12:49 PM - Posted by Anonymous (not verified)

"Either NVIDIA was over-designing for the Founders Edition or another GPU that is more power hungry may be coming down the line."

Ooooor, they intend to use the same PCB for a datacentre-dedicated card intended for rackmount chassis, where end-mounted power connectors are required to fit inside a 3U chassis. Compare the Titan X and Quadro M6000 PCBs, for example: identical, other than the placement of the power connector.

May 17, 2016 | 12:57 PM - Posted by Anonymous (not verified)

Fast-Sync is just what OpenGL with triple buffering does. Good for NVidia if they got it working under Direct3D, but this is nothing new they invented; it has been around for years.

May 17, 2016 | 07:07 PM - Posted by Allyn Malventano

Fast sync != triple buffering, unless somehow OpenGL does it differently than anyone else???

May 17, 2016 | 01:03 PM - Posted by Anonymous (not verified)

I can't wait to see the non reference cards.

May 17, 2016 | 01:43 PM - Posted by gerard (not verified)

It would be interesting to see if nvidia is asked whether it was slightly disingenuous to show a card (presumably) fully loaded on stage running at 2.1GHz and 67C, only for the retail products to be no different from the last few cards with an 80C+ operating temperature.

May 17, 2016 | 04:13 PM - Posted by Allyn Malventano

They had fans at 100% speed, which anyone is free to do with their own card and get similar results (we saw similar with our card here).

May 17, 2016 | 06:54 PM - Posted by Anonymous (not verified)

That's a dumb excuse. Nvidia didn't mention that in the press conference.

In which case I can showcase any card ever made by cranking the fan to 100%, have PCPerspective amazed at how low the temps are, and then come review time skip it entirely.

May 17, 2016 | 07:12 PM - Posted by Allyn Malventano

It's a pretty reasonable assumption that any hardware company doing a live demo of its product is going to crank the configurable options to the max for that demo. It would have been deceptive if there were some NV-only options that were not consumer-facing in the retail product. That's not what happened there. What they showed is achievable, so it's fair. AMD has demoed products in windowless cases that press were not allowed to look inside, so at least we saw the products at this one.

May 18, 2016 | 02:29 AM - Posted by khanmein

https://www.youtube.com/watch?v=wlSeHCPd75s

i personally don't believe that the fan was cranked up to 100%

May 17, 2016 | 01:50 PM - Posted by Anonymous (not verified)

I didn't have time to watch the video yet. Do they say if it's made by Foxconn like the Best Buy exclusive Nvidia-branded cards were?

If they are, I'm definitely waiting for the custom 1080 from Gigabyte. I refuse to give Foxconn any of my money if possible.

May 17, 2016 | 02:26 PM - Posted by Anonymous (not verified)

now I'm even less upset that I had to buy a GTX 970 to get my PC gaming on. Yes it was pricey, but it runs the console ports just fine for now.

looks like 4K is still 2 years away

3.5 memory doesn't look like the deal breaker it once was

E3 is next up in June... let's see if console games are going to up the ante

May 17, 2016 | 02:34 PM - Posted by Anonymous (not verified)

NVidia benchworks. As if the Pro Duo "review" wasn't enough.

May 17, 2016 | 03:13 PM - Posted by draak (not verified)

I don't care for this new card as it won't do a solid 60fps at 3440x1440, but I really like your T-shirt in the video!
:-)

I've got an overclocked 980Ti with an ultra-wide display (2560x1080), and I'm waiting for a decent monitor that will support G-Sync. 4K never worked for me, as the human cone of vision covers too small an area when sitting 1m in front of the screen.

I will be waiting for a HiDPI 3440x1440 screen with G-Sync; only then will I upgrade.

May 17, 2016 | 03:20 PM - Posted by Anonymous (not verified)

All I'm seeing is what a good deal 2x GTX 980 SLI or GTX 1070 SLI might be in the very near future!

May 17, 2016 | 04:42 PM - Posted by Anonymous (not verified)

Did Ryan schedule another vacation?

6 GameWorks games
1 Gaming Evolved

You kind of hope PCPerspective would be more professional after Allyn had to go on the last podcast and say "Ryan didn't have enough time to get to the other games" and pretend he had a list in front of him with a balanced number of games to be tested.

May 18, 2016 | 01:54 PM - Posted by Allyn Malventano

The game choices look to be consistent / proportional to the popular titles out right now. We have DX12 / AMD results included here.

May 17, 2016 | 06:05 PM - Posted by DaKrawnik

So many butthurt AMD fanboys here. Pathetic...

May 17, 2016 | 06:23 PM - Posted by Anonymous (not verified)

980ti amp extreme owner plus 750 ti ftw, even if think this should cost less, about $100 less, I mean even the 980 launched at $549 and it too was better than the Titan black and better than w 680s in slightly so stop trolling!

May 17, 2016 | 06:24 PM - Posted by Anonymous (not verified)

680s in sli*

May 18, 2016 | 09:10 PM - Posted by Anonymous (not verified)

I don't see much of that here other than a few trolls. This review, while interesting, isn't useful yet. Only an Nvidia fanboy would purchase one of these cards without waiting to see what the real competition looks like.

May 17, 2016 | 06:19 PM - Posted by Anonymous (not verified)

Temps left out for a reason?
What kind of review leaves out Temps?

May 17, 2016 | 06:31 PM - Posted by Anonymous (not verified)

Appeasing ones.

The 1080 is the same. It reaches 83C and it starts to downclock below Boost clocks. Remember the attention Ryan gave the 290X reference for clock variance.

I would hope Ryan isn't a hypocrite and writes as many articles as he did back then covering the issue so Nvidia can address it.

May 18, 2016 | 01:59 PM - Posted by Allyn Malventano

I don't follow here, was there a significant issue with 1080 temps? We did power testing rivaling the other reviews out there. Given that, and how little the 1080 draws given its performance, I can see how Ryan didn't need to focus as much on temps.

May 21, 2016 | 07:15 AM - Posted by Silver Sparrow

For a comprehensive review, Ryan ought to have included it. Whether it's deemed necessary is up to the viewer/readership.

Anonymous post or not, it's a fair comment.

May 22, 2016 | 11:54 PM - Posted by Anonymous (not verified)

A handful of German sites are reporting that when gaming over a period of time longer than a quick benchmark, the clocks settle in below boost clocks, and on several major titles the clock sits at base level, with very few exceptions where the clock exceeds boost level after gaming for more than a handful of minutes. Tom's Hardware shows that after an 8-minute run of Metro: Last Light it settles in between base and boost on a test bench. When running FurMark it goes below base clock.

http://media.bestofmicro.com/W/3/581763/gallery/01-Clock-Rate_w_600.png

Ryan needs to keep an eye on the $599 cards to see if they're cutting back and aren't performing as well in games, not just in quick benchmark runs.

May 18, 2016 | 09:07 PM - Posted by Anonymous (not verified)

The Nvidia boost clock abilities (I forget what it is called) make for misleading comparisons. AFAICT, the current 28 nm parts often boost clocks higher than the specified stock boost clock. The stock GFLOPS ratings are not what you get, so Nvidia isn't magically that much more efficient. The only time most people have a problem with clock throttling is if it throttles the clock below the specified base clock. It is generally expected that a card should have sufficient cooling to never drop below the specified base clock. I don't really like these policies, since when overclocking is essentially the default, you don't really know what you are buying. It would be difficult to sell 5 different versions of the same card at different clock speeds, though.

May 17, 2016 | 06:33 PM - Posted by Anonymous (not verified)

Stop drinking the nvidia Kool-Aid, Ryan, consistently comparing the 1080 to the 980 like nvidia wants you to. You should be comparing it to the 980 Ti if you want to give a truly impartial review. The 980 Ti had the same launch price, and is considered the card to beat.

May 17, 2016 | 07:17 PM - Posted by Allyn Malventano

980Ti launched at a higher MSRP. Also, I included 980Ti in VR results. 1080 is marketed as their mid-level GPU (as in there will be a 1080Ti), so 1080 to 980 is fair...

*edit* wait a sec, Ryan *did* test 980Ti. What point were you trying to make exactly again?

May 17, 2016 | 07:57 PM - Posted by Anonymous (not verified)

Unfortunately Nvidia hasn't given a date for the $599 1080. What we have is the $699 Founders Edition.

As of now the 1080 will launch higher than the 980 Ti.

May 18, 2016 | 04:19 AM - Posted by albert (not verified)

Will Nvidia extend the time for founders edition sales and push back non founders cards.........you bet ! Watch for customers getting pissed off once again !

May 17, 2016 | 07:24 PM - Posted by Anonymous (not verified)

Nice higher end enthusiast card. Can't wait for the 1080ti and Titan variants that will eventually follow.

May 17, 2016 | 08:35 PM - Posted by Garry (not verified)

When the prices drop substantially, a second 980Ti will be my choice.
Rather that than ditch the current card..

Pity there weren't SLI results shown for 980Ti SLI too. Yeah, no SLI for DX12, but like a lot of people I'm still on Win7.

May 17, 2016 | 08:55 PM - Posted by hoohoo (not verified)

Good review, thanks!

May 17, 2016 | 11:32 PM - Posted by Anonymous (not verified)

Holy crap this review is good. New DX12 tool, VR test and power measuring. The amount of data crammed in here was fantastic. Great work Ryan, Allyn, et al.

The GPU ain't bad neither.

May 18, 2016 | 01:06 PM - Posted by Jeremy Hellstrom

Thanks, it is nice to see someone appreciate how much work it takes to put those together.

May 18, 2016 | 02:01 PM - Posted by Allyn Malventano

Thanks! It was a lot of work for sure!

May 18, 2016 | 12:42 AM - Posted by Anonymous (not verified)

Kind of disappointing for a new architecture on a double node shrink. Also worrying is the pricing: if this midrange card is $700, are we going to see a $1000 high end (1080ti)? A $1500 Titan?

The 980ti was released at a lower price and not that much lower performance. One would expect two node shrinks and a $50 increase to provide more than 20% or so additional performance. Even if this is a $600 card, that's still not much performance considering how long we've been waiting for those shrinks - and this is not a $600 card. The product that Ryan received and reviewed costs $700. Sure, you will be able to get similar cards for less in the future, but that's the case for all graphics cards, and you can't judge a product based on future price drops.

The tech (and especially frequency) is pretty nice, but this is a midrange card released for above high-end price.

May 18, 2016 | 06:45 AM - Posted by Anonymous (not verified)

Currently the fastest single GPU gaming card = midrange? Lol.

If you don't like the price don't buy it. There's plenty of opportunity for competition to come in and offer a better product at a lower price. Until that happens Nvidia will extract as much profit as possible.

May 18, 2016 | 08:54 PM - Posted by Anonymous (not verified)

I suspect supply is going to be constrained, which is probably most of the reason for the higher Founders Edition price. If the Ti version is based on the 600 mm2 part, then I would expect prices to be ridiculously high. Yields on 16 nm will not be anywhere near as high as they are for 28 nm parts. They probably do have a huge number of defective parts though, and these may be functional enough to eventually be sold as a 980 Ti of some flavor. Perhaps they will have multiple configurations of such salvage parts. They will have to wait for HBM2 production to reduce memory prices before they can make a consumer-grade part.

Considering the tone of the reviews, it was probably worth it to do an early launch, even if it ends up being a bit of a paper launch, to get all of the favorable publicity. This way, Nvidia's cards get compared to old 28 nm parts rather than AMD's 14 nm parts. It is obvious that the 16 nm part should have significantly better power efficiency than 28 nm parts. Comparisons with 28 nm parts are mostly irrelevant. We don't know how this compares until we have AMD parts for comparison.

May 18, 2016 | 08:53 AM - Posted by lucas.B (not verified)

Can you add another DX12 title please, Ashes of the Singularity? Because we only have one real DX12 title, Hitman, until we have another game to try out
(Gears of War was just a bad port... so I guess it doesn't really count).
Because it looks like async is closing the gap in fps.
I know Nvidia is using a different type of "async", let's say on the software side of things.
It doesn't really matter how they do it; what will matter in the end is how well games run.

May 18, 2016 | 08:55 AM - Posted by Anonymous (not verified)

Will PCPer conduct GPGPU testing for both CUDA and OpenCL performance?
I understand the 1080 is marketed toward gaming, but it would be great for video processing as well!

May 18, 2016 | 12:47 PM - Posted by Scott Coburn (not verified)

Wow, a lot of whining going on here in the comments. You know, if Ryan didn't benchmark the game you are interested in seeing, PC Per isn't the only review site. There are 1080 reviews on all the major sites: Guru3D, HardOCP, Tweak Town, Tom's Hardware, etc. All the whiners are acting as if they have to be paying registered users of other sites to view their reviews, so they want PC Per to benchmark the games they wanted to see. Literally no intelligent person gets their news / product reviews from a single source. There is nothing wrong with this review. The problem is that the results don't fit the whiners' agenda. So what would you have Ryan do, find the 1 or 2 games that AMD can compete against the 1080 with and only benchmark them? I'm not sure there are even that many games the Fury X could compete with the 1080 in. So that review would be very interesting: here are all the new technologies that come with this new GPU, these are the hardware specs, oh and sorry, no benchmarks today (whispers behind his hand "because AMD couldn't beat or compete against it"), so sorry.

As to pricing, sure, it could be cheaper; so could Tesla's Model S, or anything that's new and awesome. However, that's not how things work. If you don't like the pricing, don't buy one. The free market will dictate how much they go for, so if nVidia finds these sitting on the shelf for a month with little turnover, then the price will come down. Don't hold your breath for that to happen, though. These $700 Founders Editions are going to fly off the shelf. For one, anyone wanting to water cool these right away is going to want them, and it's almost guaranteed that there will be waterblocks for these "reference" cards on day 1. Even if nVidia had to run the fan at 100% to hit 2.1GHz, that's still on air. I can't believe anyone is complaining about a GPU that can run @ 2.1GHz on air. Before this, that sort of clocking required serious cooling, at the very least custom watercooling or more likely LN2. So if it'll run @ 2.1GHz on air with the fan at maximum, what can we expect it to do under water? The benchmark kings must be creaming their jeans, shining their LN2 pots just waiting to get their hands on a 1080. How long before we start seeing Fire Strike Ultra scores showing up at the top of the benchmark charts with GPU clocks north of 3GHz?!?

I for one am not disappointed. This thing is a beast. For those wanting more, just wait for the 1080 Ti. I'd tell you to wait for Polaris 10, but what would be the point? They've already said those cards are going to be mid-level cards with a reduced power profile, so not much to get excited about. Also, if rumors are to be believed, we may not see Polaris until October now. According to the rumors, Polaris failed validation, so it looks like it might need another silicon spin.

I'm not trying to beat up on AMD; I root for all the new hardware. I really liked the 390/X series, and the Fury series was also decent when it released. But now they aren't the new shiny anymore. Obviously their prices are going to need to come down, especially when the $600 1080s appear in a month and a half. The timing isn't great for AMD; they didn't get nearly long enough on top after the release of the Fury series. They really needed another 6 months. Unfortunately for them, their releases of new high-end products are spaced too far apart. It always seems that when AMD releases the new top dog, nVidia is right around the corner with something to slap it back down, and then nVidia releases something even faster before AMD has even had a chance to respond to the previous release.

May 18, 2016 | 08:31 PM - Posted by Anonymous (not verified)

TL;DR

May 18, 2016 | 08:30 PM - Posted by Anonymous (not verified)

Asynchronous compute still sounds like it will be mostly unusable. Even if it can preempt at the instruction level, it sounds like it still needs an expensive context switch. A 100 microsecond context switch seems really slow, comparatively speaking.

It would be great to get a more in-depth analysis of this without spin. It seemed to me that multi-GPU was going to be implemented partially with asynchronous compute going forward. With the asynchronous compute that AMD has implemented for years, compute work can be handled by any available compute resource, regardless of location. It could even run on an integrated GPU to supplement the main GPU.

The current techniques are probably a bit of a dead end. AFR and split-screen style rendering just don't scale well. Splitting up the workload with finer granularity will be required to make good use of multiple GPUs. Nvidia is holding back the market in this case: if developers want to support multi-GPU on the large installed Nvidia base, then they will not be able to use asynchronous compute to achieve it. Hopefully the market can move forward with asynchronous compute independently of Nvidia, thanks to the installed base in all of the consoles. It will be worthwhile to implement asynchronous compute features for the consoles, so PC ports on AMD cards can hopefully make use of them as well.
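For anyone who hasn't touched the explicit APIs, here is a rough illustration of the plumbing being argued about (a minimal sketch of my own, not code from the review): D3D12 lets an application create a dedicated compute queue alongside its graphics queue, but the API makes no promise that work on the two queues actually overlaps - that is exactly where the hardware scheduling differences discussed above come in. The device pointer is assumed to come from the usual D3D12 initialization.

```cpp
// Minimal sketch: a graphics (DIRECT) queue plus a separate COMPUTE queue.
// Whether compute work truly runs concurrently with graphics, or gets
// time-sliced behind a context switch, is decided by the GPU/driver, not here.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// `device` is assumed to have been created during normal D3D12 initialization.
void CreateGraphicsAndComputeQueues(ID3D12Device* device,
                                    ComPtr<ID3D12CommandQueue>& graphicsQueue,
                                    ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Graphics queue: accepts draw, compute, and copy command lists.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Dedicated compute queue: the application submits compute command lists
    // here "asynchronously" and synchronizes with fences; the hardware
    // scheduler decides how (or whether) that work overlaps the graphics queue.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
}
```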

May 18, 2016 | 09:21 PM - Posted by Anonymous (not verified)

The multi-projection stuff seems interesting, but it also seems like something that could probably be done quite efficiently in software. It would be good if you could get info from game developers on how this will compare to software solutions. I tend to think that VR will be best served by multi-GPU set-ups, as long as the software support is there. Nvidia seems to have gone the big-GPU route, so it is not in their best interest to support multi-GPU set-ups; doing so opens them up to more competition from makers of smaller GPUs, which may not only mean AMD going forward, especially in mobile.

May 18, 2016 | 11:17 PM - Posted by djotter

Really comprehensive analysis here, team, thanks for putting it together! The videos with Tom are quite informative too. It will be hard to justify the price for me personally, but imagine how well it would fold! The 1070 might be more for me ;)

May 19, 2016 | 01:30 AM - Posted by Anonymous (not verified)

1. HDR - which kind? Dolby Vision, HDR10, or just standard dynamic range (SDR)?

Does the card's HDR support target 1000 nits or 4000 nits? And, as usual, is that supported in the card's hardware or in software?

2. In this picture:
http://www.guru3d.com/index.php?ct=a...=file&id=21784

it says contrast over 10,000:1 with the Nvidia 1080. In other words, with previous video cards we only got 2000:1 contrast???

3. What color system does the Nvidia 1080 support? The Rec. 2020 color gamut, DCI-P3, or only Rec. 709?

http://www.guru3d.com/articles_pages..._review,2.html

If anyone has an answer, please bring links. It is not a question of logic.

May 19, 2016 | 11:22 AM - Posted by Allyn Malventano

Any GPU can support any color system - the catch is having enough additional bit depth available to prevent obvious banding when operating in that expanded system. Support for the color system itself really boils down to the display capability and calibration.
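To put rough, back-of-the-envelope numbers on that (mine, not Allyn's): 8 bits per channel gives 2^8 = 256 steps, while 10 bits gives 2^10 = 1024. Rec. 2020 covers roughly twice the chromaticity area that Rec. 709 does, so stretching the same 256 steps across the wider gamut roughly doubles the size of each step and makes banding in smooth gradients far more visible; moving to 10 bits quadruples the step count and more than makes up for the wider range.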

May 19, 2016 | 03:31 PM - Posted by Anonymous (not verified)

It is interesting to note that Ashes disables Async Compute if it detects an nVidia card - including Pascal, so we still don't have an accurate representation of how nVidia really does in that benchmark/game.

May 19, 2016 | 05:37 PM - Posted by Anonymous (not verified)

I'm upgrading from my 970 to this beast of a card. Simply brilliant for new games.
