The GeForce GTX 1080 8GB Founders Edition Review - GP104 Brings Pascal to Gamers
SLI Changes and FastSync
In addition to the changes in asynchronous compute, I think the changes to SLI will perhaps generate the biggest discussion in the community. Let’s start with the technical changes, first announced at the GTX 1080 launch last week.
When using two GeForce graphics cards in previous generations, you only needed to connect a single SLI connector on each of the cards with a bridge, either a flexible ribbon or a fixed PCB. We always knew that the connection between the two GPUs in that case was bandwidth-limited, but to what degree was never publicly known. With Pascal and the GTX 1080, NVIDIA is introducing a new SLI bridge called SLI HB, for high bandwidth. These bridges connect to both of the connectors on top of the cards, linking them together to improve bandwidth between the GPUs.
The frequency of the new SLI connection on the GTX 1080 is 650 MHz, up from 400 MHz on previous GPUs.
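Assuming the link's data path width is unchanged and throughput scales linearly with the interface clock (an assumption on our part; NVIDIA has not published the bridge's raw bandwidth figures), the clock bump alone implies roughly a 60% increase in bridge bandwidth:

```python
# Back-of-the-envelope sketch: relative SLI bridge bandwidth, assuming
# bandwidth scales linearly with interface clock and the data path
# width is the same on both generations (hypothetical simplification).
OLD_CLOCK_MHZ = 400  # legacy SLI interface clock
NEW_CLOCK_MHZ = 650  # SLI HB interface clock on the GTX 1080

speedup = NEW_CLOCK_MHZ / OLD_CLOCK_MHZ
print(f"SLI HB relative bandwidth: {speedup:.3f}x")  # 1.625x
```

Whether the real-world gain matches this ratio depends on protocol overhead, which NVIDIA has not detailed.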
This can get confusing, so let’s define the types of SLI bridges. First, you have the “standard” bridge, built like a ribbon cable, or even an older PCB-based one without lighting. Second, you have the “LED bridges” released in the last couple of years by both EVGA and NVIDIA; if you have an SLI bridge that lights up when connected, that’s what you have. Finally, the new “HB bridge” is built specifically for the GTX 1080 (and, we assume, other Pascal GPUs).
The original SLI bridges that you might have several of from motherboards over the years are only recommended for single-display configurations of up to 2560x1440 @ 60 Hz. If you have one of the LED bridges, you can properly drive high refresh rate 2560x1440 displays as well as 4K monitors. If you want to push into 5K or Surround gaming, though, NVIDIA recommends one of the new high bandwidth SLI bridges.
NVIDIA showed a test result comparing the new SLI HB bridge against an older model when running Shadow of Mordor in 4K Surround. While the older bridge couldn’t keep up with the transfer of triple 4K images, the new higher bandwidth bridge could, resulting in a smoother gameplay experience with no stutters. It should be noted that without the HB bridge in place, the frames from the second GPU are sent over PCI Express, which has clearly higher latency than the SLI connection.
With GTX 1080 now recommending that you use BOTH SLI connectors on the card for 2-Way SLI configurations, what happens to 3- and 4-card SLI? By default, NVIDIA will only be supporting two GPUs in SLI, and 3- and 4-Way SLI configurations “are no longer recommended.” Why?
As games have evolved, it is becoming increasingly difficult for these SLI modes to provide beneficial performance scaling for end users. For instance, many games become bottlenecked by the CPU when running 3-Way and 4-Way SLI, and games are increasingly using techniques that make it very difficult to extract frame-to-frame parallelism.
But there is a catch! Even though it’s not recommended, NVIDIA will still allow 3-Way and 4-Way configurations with the GTX 1080 through the use of something called an “Enthusiast Key.”
For this class of user we have developed an Enthusiast Key that can be downloaded off of NVIDIA’s website and loaded into an individual’s GPU. This process involves:
1. Run an app locally to generate a signature for your GPU
2. Request an Enthusiast Key from an upcoming NVIDIA Enthusiast Key website
3. Download your key
4. Install your key to unlock the 3- and 4-Way function
Oooookkkkaaayyy….? I’m torn on this one. I can absolutely understand the position NVIDIA is in. If SLI users represent maybe 2-4% of GeForce users, then 3-4 GPU users have to be an infinitesimally small minority, making the added cost of testing, validation and driver optimization for that group a painful position to defend to accounting. But if you are going to allow some users (and system builders) to enable it, then why put this artificial, but easily bypassed, lock and key system in the way? To me it simply appears to be a way to deemphasize multi-GPU configurations above two cards.
NVIDIA is not building HB SLI bridges for 3- and 4-Way configurations, so you’ll have to use legacy bridges with your GTX 1080s. While NVIDIA tells us that they are still putting effort into 3+ card multi-GPU configurations, and educating developers about the best practices to integrate them, they admit that “game designs are becoming less friendly over time for more than 2 GPUs.” Because of this, they want to set “appropriate expectations and steer most users towards 2-way systems.”
FastSync splits render and display pipelines
I sure hope you weren’t tired of technologies that end with “sync,” because NVIDIA is releasing another one with the GTX 1080. FastSync is an alternative to the Vsync On and Vsync Off states, but it is not variable like G-Sync or FreeSync. The idea is straightforward: decouple the render pipeline from the display pipeline completely, and there are a lot of interesting things you can do. FastSync tells the game engine that Vsync is OFF, allowing it to render frames as fast as possible. The monitor is then sent frames at its maximum refresh rate, but only completely rendered frames, avoiding the tearing artifacts usually associated with Vsync Off.
FastSync creates a virtual buffer system that includes three locations: the front buffer, the back buffer and the last rendered buffer. The front buffer is the one that is scanned out to the monitor at the speed of the display refresh rate. The back buffer is the one being rendered to by the GPU, and it cannot be scanned out until it’s complete. The last rendered buffer holds the frame just completed in the back buffer, essentially saving a copy of the most recently rendered frame from the game. When the front buffer finishes scanning to the display, the last rendered buffer is then copied to the front buffer and scanned out.
Interestingly, because buffer copies would take time and add latency, the buffers are instead dynamically renamed. In high frame rate games, the last rendered buffer and back buffer switch positions at the render rate of the application, and when the front buffer has completed its most recent scan out, the current last rendered buffer is renamed to the front buffer, immediately starting its scan out.
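As a rough illustration of that renaming scheme (a simplified model of our own construction, not NVIDIA's actual driver logic; the class and method names are hypothetical), the three buffers can be rotated by reference rather than copied:

```python
# Illustrative model of FastSync's three-buffer renaming: buffers are
# swapped by reference ("renamed"), never copied pixel-by-pixel.
class FastSyncBuffers:
    def __init__(self):
        self.front = "frame0"          # being scanned out to the display
        self.back = "frame1"           # being rendered to by the GPU
        self.last_rendered = "frame2"  # most recently completed frame

    def render_complete(self):
        # The finished back buffer becomes the last rendered buffer;
        # the old last rendered buffer is recycled for the next render.
        self.back, self.last_rendered = self.last_rendered, self.back

    def scanout_complete(self):
        # At each display refresh, the newest complete frame is promoted
        # to front buffer by renaming, so no copy latency is added.
        self.front, self.last_rendered = self.last_rendered, self.front

bufs = FastSyncBuffers()
bufs.render_complete()   # GPU finishes a frame faster than the refresh
bufs.render_complete()   # another frame completes; the older one is dropped
bufs.scanout_complete()  # display grabs the newest complete frame
print(bufs.front)        # frame2
```

Note how the second `render_complete()` silently discards a finished frame; that is the deliberate "dropping" behavior discussed below.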
The usage model for FastSync is games running at very high frame rates (competitive gaming) that otherwise have to choose between the high input latency of Vsync On and the screen tearing of Vsync Off. CS:GO gamers used to hitting 200 FPS will be able to play the game tear-free with only a very slight increase in latency, about 8 ms according to NVIDIA.
This is definitely something that should only be enabled in the NVIDIA Control Panel for games that run at frame rates well above the maximum refresh rate of your display. FastSync will by its very nature introduce some variability to the smoothness of your game, as it is “dropping” frames on purpose to keep you in time with the refresh of your monitor while also not putting backpressure on the game engine to keep timings lined up. At high frame rates, this variability isn’t really an issue, as frame times are so short (200 FPS = 5 ms) that +/- 5 ms is the maximum frame-to-frame variance you would see. At lower frame rates, say 45 FPS, you could see as much as a 22 ms variance.
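The arithmetic behind those variance figures is simple: the worst-case frame-to-frame variance is one render interval, i.e. the reciprocal of the frame rate. A quick sketch:

```python
# Worst-case frame-to-frame variance under FastSync: since a frame can
# be repeated or dropped, perceived pacing can shift by up to one
# render interval (1000 ms divided by the frame rate).
def max_variance_ms(fps):
    """Worst-case frame-to-frame variance in milliseconds."""
    return 1000.0 / fps

print(max_variance_ms(200))  # 5.0 ms at 200 FPS -- barely noticeable
print(max_variance_ms(45))   # ~22.2 ms at 45 FPS -- visible judder
```

This is why the feature only makes sense when the render rate far exceeds the display's refresh rate.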
FastSync is a cool new feature to improve the experience of FAST games, but don’t think NVIDIA has found a free alternative to variable refresh rate technology.
NVIDIA did state that FastSync was coming to Maxwell as well, and possibly even Kepler graphics cards.







I'm seriously disappointed the new generation isn't going to usher in a new era of 60fps 4K gaming with max details. It's half-again faster than the 980Ti, so it's a huge step up, but it didn't quite reach my expectations.
The reality is that developers will spend a certain amount of time optimising the game. And if the game runs reasonably well on most hardware and configurations (1080p/1440p), then they're going to stop. As better GPUs come out, so too do newer and more demanding games.
I mean, if you take the 1080 and run Battlefield 4 at 4k you can hit 60fps easily now.
Yeah... I was expecting... more...
Too much apparently :/
Let's see what $1,000 will buy us in a few months then.
Meh. Some people always seem to find a reason to complain i guess.
Most new models have had around a 15%-20% performance increase over the previous models for many years.
This chip and card seem to offer about 3x the standard increase, at 45-60% faster. This is simply by looking at base clock speeds.
This is not even benchmarked. The new GPU and memory are slapped onto each other and both are new tech. The end result might even be better than the average +50%.
This is without overclocking and product optimizing.
LOl oh please where do you get your numbers from?
I'm not impressed with it either. The Nvidia presentation had the OC at 2100 MHz running at 67C; what they didn't show was the fan at 100%, probably in a refrigerator. Thermal throttling kicks in at 82C.
For 980 SLI vs. the 1080, in only a few cases was the 1080 faster, and maybe by 1-2% at that.
For those already owning a 980 or 980ti, it is an upgrade yes, but I'm not sure it's $700+.
Thermal Throttling kicks in at 92C not 82C
YOU ARE TOTALLY WRONG THERMAL THROTTLING KICKS IN AT 83c not 92c
Yes and No.
You can set the GPU limit differently on different cards. Not sure about the Founders, but I saw an MSI card at 92degC.
The Founders preset was about 82degC.
RIP 980TI and Maxwell ! You served well.
The partners' custom cooled and factory overclocked cards are the way to go. I'm excited to see what insane speeds they can achieve. With a little driver maturity and custom cards, is 25% faster than the current reference possible? Could be.
I'll hold on until in-the-wild testing can confirm if theoretical VR performance increases can translate into ACTUAL VR performance increases (as VR SLI and Multi-Res Shading did not), but I might be moving up from the 980Ti sooner than I was intending.
Nice to see updated Nvidia BenchWorx in action. ;)
So.......nvidia....where is the promised async compute driver for GTX 900 series after this?
Exactly. Where's the DX12 support for Fermi GPUs? Nowhere, and later cancelled :P
Oh, it's all available in this shiny new box, and for only $699! Also, you get a really nice hardware dongle with it, and can use your old card for PhysX or as a paperweight! :P
ROTFL...
Where are the games that you need DX12 for? LOL
:p
Silly rabbit, I only play AotS and refuse to play any other game.
Must be a "red shirt" Star Trek slash AMD joke in there somewhere.
Idiot, they can't and won't come. They can't do async; it's a flawed card design, and Crapvidia announced you all got screwed by them. Hope you enjoy it.
They can do asynchronous computation, but they can't do async compute.
NVidia's technology needs to be optimized for. NOBODY has any useful tests yet so it's pointless to scream about the situation.
I mean, did you even read the article?
Not bad, but not extraordinary either.
It would have been good to show some overclocked results in gaming. It reaches 2 GHz, but how much of that speed translates into real performance? Or did I miss something?
Any coil whine, Ryan?
Add one more --> Any 'unannounced' memory segmentation (ROP amount, L2 Cache size too) features?
No, no, no! That's why this is "Founders Edition". You'll all have to find this for yourself. That's why they reduced the price...oh, wait...
None to speak of aside from the typical 500FPS loading screen sound.
SLI doesn't work in DX12... sooo... maybe take out the Hitman and Tomb Raider 980 SLI results.
In the DX12 tests, 980 SLI is pretty much just the speed of a single 980.
It does prove the point of the limitations of DX12 and UWP gaming, so I would say leave it in. After all, these are representative of what folks are playing.
Another idiot... it has nothing to do with DX12. DX12 made SLI and CrossFire stone-aged; you can use both cards in series, utilizing all of the memory in both. But then again, the DX12 limitations only apply to Crapvidia; AMD runs fine, with a performance boost way above Crapvidia, where Crapvidia gives you an OMG WOW 10% improvement... WOW, I want to run and get a new paperweight.
So "faster than 980 SLI" turns out to be a total lie.
The 160 W TDP on the slide is actually 180 W...
The 9 TFLOPS are actually 8.2 TFLOPS.
And when the other company does a paper launch, everyone goes nuts and hates them; here we praise lies, with "misunderstanding between departments" as the excuse.
At least Maxwell (excluding the 970 fiasco) was an engineering feat in terms of clock speed and power consumption.
Nothing to see here (only the software stuff and how it will perform).
I was hoping to get a new card, but I guess I have to wait for the next one (no matter the camp, I want better perf/$ than this).
Well...it's always like that. You use the metrics that most suit your point. We should return to the CEO Math (Pascal will be 10x Maxwell) and see how that hype turned out today :P
But I agree with you.
It's as fast as 980 SLI when OC'd to 2 GHz.
Sure, OC the GTX 1080, but you don't OC the 980. With the 980 and 980 Ti overclocked... this won't look half as impressive. It's interesting how reviews are made these days...
Did you note the boost clocks all the cards reached during game testing? I'm wondering if the 1080 would be much of a jump over a 15/20% factory-overclocked 980 Ti.
The 1080 is 25% faster without even overclocking.
Overclocked it's over 40% faster.
Overclock the 980ti and the 1080 is still at least 25% faster...
I wish to find more Blender Cycles benchmarks.....
T^T
Great minds think alike :)
Definitely want to see some compute tests, preferably with the Blender Cycles CUDA test, like the BMW test...
OpenCL too, no vendor lock-in please!
I would have to say that these days, the BMW test is one of the least useful performance test blend files you can possibly run. It uses only Diffuse, Glossy, Glass and Emission shaders - which run fast on every card I've ever owned. Try putting it through its paces on the hair simulation scenes in the Blender Foundation's 2016 benchmarks, and that's where you'll see the card start to show its true colors. The Koro(lamaa) and Fishy Cat scenes will be the proof in the pudding for whether it's another lemon at release that will take 10 MONTHS OF DRIVERS AND CYCLES UPDATES (I'M LOOKING AT YOU, 980 TI) to start performing well, or whether it handles things like hair and particles well. I'm also hoping that the next generation of cards starts to perform well with volumetrics - My 780 TI cards still offer the best performance per card on all complex shaders.
Blender Foundation's February 2016 Cycles Performance Test:
https://code.blender.org/2016/02/new-cycles-benchmark/
Blender Cycles benchmarks for Windows and Linux, and whatever Vulkan API stuff the Blender Foundation may be using in the future!
And more, much more: Linux (SteamOS, others)/Vulkan gaming, and Blender graphics benchmarks.
Vulkan will it Blender?
Wonder when you plan to test compute capability. I so want to see Blender in CUDA, just to see how it compares to other Nvidia cards and whether we'll truly see those improved FLOPS.
This is depressing; Nvidia pretty much did a finishing move, Mortal Kombat style, on AMD today :(
The AMD GPU market is going to shrink even more now that Nvidia is firing on all cylinders...
AMD will have to drop prices by 20 to 40% or sales will plummet to nothing, because it seems the GTX 1070 will easily beat the Fury X and will be priced $200 cheaper!
So AMD will need to drop the price of the Fury X by $300 (half the RAM, more power consumption, and slower).
Makes you wonder if the Fury line can even be profitable at this stage; AMD might have to cancel production. Oh my, this is bad.
This means AMD can't sell any Fury or R9 without losing money,
and Polaris will be relegated to the new mid/low-end class,
where margins are thin.
Get an Nvidia card for the advertised pricing first, and then say AMD has to lower this and that.
We have no info on the real availability of the new 1080, so it's premature to say it will affect the pricing of any card, be it AMD or Nvidia. It could very well be priced from $750 to $800 for a few months, and that price won't compete with any card on the market.
A consumer will wait if he/she knows the 1080 has an MSRP of $599 and only sees cards that are half as fast and cost $650.
Pay $150 more for a slower card with half the RAM?
But yes, it's possible that people value the 1080 at much more than $600.
Nvidia will sell out of the $800 model because compared to a $650 Fiji it's a super deal (FPS per $).
I'm sorry, but I don't see AMD getting out of the hole with its upcoming lineup. And Nvidia will drive AMD prices so low that its margins are going to be terrible...
I think AMD didn't need a finishing blow, they did plenty of damage to themselves by the whole Fury debacle (in hind-sight) and by simultaneously raising prices through their 8GB Hawaii rebrands.
I guess they're now shooting to be the king of 60W GPUs, so a hearty *shrug* to them.
AMD targets Polaris parts at the low end, yes.
Large 14nm chips with low clocks will compete, but the margins are going to be horrible for AMD.
And I agree AMD's pricing is ridiculous... At retail, AMD wants you to pay more for a slower and hotter card. This is why Nvidia is turning billions in profit and AMD's GPU division is deep in the red.
The R9 390X was a HUGE mistake... AMD only had to tweak the 290X, by NOT overclocking the beast and dropping the price to $260.
AMD also doesn't believe that software optimization is profitable.
AMD's mantra: "Slow, un-optimized drivers? We can fix that by making bigger GPUs."
$700 card isn't going to affect AMD much. And the volume is likely not going to be that big even if it would affect AMD. Not a finishing move. If nvidia won the sub $350 market, then it would be a finishing move. But it looks like the 1060 is a ways off and the 1070 might not have a clear win (and costs too much).
There is also the case of OEMs. AMD is poised to offer OEMs new affordable parts to market to their customers. Nvidia going after the high end means they aren't
$600 for the OEM 8GB models. The 4GB Fiji is $650, and is much slower.
It is at most at the 1070's performance level.
The issue is that AMD can't keep Fiji alive in its lineup by lowering its price, because the price would need to drop from $650 to about $280-$350, making the card unprofitable.
In contrast, it's possible to keep other 28nm cards on the market.
Nvidia will continue to manufacture and sell 28nm parts in its lineup,
because Maxwell is very cheap to manufacture. Nvidia can drop the GTX 980/970 by $100 and still make very good profits.
Another issue: TSMC 16nm seems to clock much, MUCH higher than AMD's 14nm process. This means Nvidia can sell a 1.6 GHz part while AMD is stuck at 1.3 GHz. A 25% gain in clock plus a better architecture gives Nvidia a 30 to 40% gain in per-transistor performance.
AMD will need chips 30 to 40% bigger to compete, which in turn reduces their margins.
And the problems don't stop there. Nvidia already gave final cards to reviewers... with AMD we don't even know the paper spec of Polaris 10, let alone the card that will replace the Fiji line.
So I don't see AMD gaining any market share this year, and the AMD GPU division has been losing money, so things could get worse. Things are not looking good at all :(
What are you talking about? This is next generation, which AMD will be releasing in less than a month. Your comparison would be like proclaiming the Xbox division dead when the PS4 was released because it was more powerful than the Xbox 360, when the XB1 was only a week away. Pascal is competing with Polaris, NOT Fiji, and if rumors are true then the first Polaris cards will release less than a week after the 1080 hits.
You believe that? We don't even have the paper spec for the mid/low-end Polaris 10.
AMD has announced no date for a card to compete with the 1070 or 1080.
Vega is 2017 at the earliest, with no spec.
With AMD it's all rumor and speculation. Nvidia actually gave production boards to reviewers already.
If AMD has a 6-month gap before it can compete with the 1080, AMD will have lost the high-end market. And margins are going to SUCK for AMD in the low end, because Nvidia can do so much more per transistor and TSMC 16nm seems to clock much higher...
I personally think AMD is entering its biggest challenge.
Sure looks that way. For obvious reasons, it's very bad news for everyone (except Nvidia) if AMD can't retain enough market share to move forward with r & d as a serious competitor. Either AMD starts pulling rabbits from hats, or we all lose out. People should bear that in mind when they switch to fan-boy mode.
Polaris will not be relegated to the mid/low-end class; it was designed for it. Both Polaris 10 and 11 are smaller chips.
Agreed that AMD can now stop bothering to produce the Fury line. Same for the 980 Ti, and after the intro of the 1070, also for the 980 and 390X. Polaris is probably going to do the same for the 970/390 and everything below, stopping above Intel's integrated GPUs.
They have to stop production of Fiji, or it will be a financial disaster.
But the issue is, we don't even know the spec of Polaris 10,
yet the 1080 cards were reviewed already and we have a hard release date.
Who will spend $650 on a slow 4GB Fiji when they can get a $600 GTX 1080?
And Polaris 10 is expected to be half the speed of the 1080.
So AMD will have no card to compete with the 1070 and 1080 until 2017.
Ah, it's not as bad as it looks. Maybe at the moment, but you need to look further. This guy really seems to know how it all hangs together. Kind of like a master plan for AMD.
https://www.youtube.com/watch?v=0ktLeS4Fwlw
AMD will make a comeback in the coming years, maybe not at the high end at first, but their architecture controls almost all of the total gaming market through consoles. Just watch the videos ^^
But that advantage is irrelevant now.
Pascal is doing amazingly well in DX12; console optimization will now benefit Nvidia more than AMD.
AMD's 14nm process also seems worse than TSMC's 16nm.
And TSMC already has 10nm quite far along, so Nvidia might be able to jump nodes much earlier than AMD for the next gen.
Nvidia is untouchable at the mid/high end (that includes HPC)
and can make smaller chips than AMD for equal performance at the low end.
AMD will have to really lower their margins to compete in the low end. That might not be enough to make the AMD GPU group profitable again :(
I wish you could collapse troll threads here.
lol, are you in for a surprise. I just can't believe how many crapvidia fanboys are blindsided... you all deserve what you get, more money than brains; for a whole % fps less you'll go spend more money to support a lying, cheating piece of crap company that rapes you at every turn... wow, what trolls... and idiots to boot... well, very entertaining forum, but I want real facts, not under-the-bridge trolls that don't know their ass from a hole in the ground...
Good work guys! I enjoyed the video review/summary.
There is a part of the Async compute diagram that isn't making a lot of sense to me.
The entire point of async compute was to allow the simultaneous running of graphics and async shaders to get better utilisation of resources, so the shaders were not sat idle. But the shaders aren't capable of doing the "graphics" work, since it's fixed-pipeline work. In effect, what the dynamic diagram ought to be showing is an additional compute workload happening after the first one, not compute taking over graphics work; that doesn't make any sense.
I don't think the issue with Maxwell's implementation was really clear and these slides aren't helping clear up the situation. Do you have more information about how this will work? What am I missing here?
No... that's not the point. That's what AMD tries to sell you, but that's not what async means.
It's been explained better at other places that async does not require parallel.
"Similarly for compute tasks, Pascal integrates thread level preemption. If you happen to be running CUDA code, Pascal can support preemption down the instruction level!"
So what they may be saying is that it's improved, but that it's not fully hardware based, and that single-instruction preemption needs CUDA code to be of any help for debugging at the single-instruction level (AKA single-stepping through code in debugging mode)! Most certainly Nvidia has improved some thread-level graphics/compute partially-in-hardware scheduling, and that will result in better utilization of GPU hardware execution resources than in previous Nvidia generations.
I do not like the sound of that "happen to be running CUDA code," as it smacks of a vendor-specific proprietary solution that forces others into the CUDA ecosystem in order to obtain the ability to look at things at the instruction level. How is this going to play out for Vulkan/other API debugging, as well as OpenCL or other cross-platform open code/graphics APIs that may not be using CUDA?
There is going to have to be a serious comparison and contrast of the in hardware async-compute features of both Polaris/Vega, and Pascal/Volta and it cannot wait for the Hot Chips Symposium white papers and other professional trade events.
Any GPU processor thread scheduling/dispatch done in software is just not going to be as responsive to sudden asynchronous events at the hardware/instruction level as scheduling done fully in hardware by a specialized thread/instruction scheduler/dispatch and context switching unit. No amount of trying to hide latencies for asynchronous events in software can respond to an asynchronous GPU thread event as efficiently and rapidly as logic fully implemented in the GPU's (or any processor's) hardware! Without fully-in-hardware asynchronous compute thread scheduling/dispatch and context switching, there will be idle execution resources even with work backed up in the processor's thread scheduler queues! Software-based scheduling, lacking fully-in-hardware units, has an intrinsic inability to respond at the sub-single-instruction level to changing events in a GPU unit's execution pipelines (FP, INT, and others) the way fully-in-hardware async compute units can.
Read up on Intel's version of SMT (HyperThreading) to see how async compute is done fully in hardware; async compute done fully in a GPU's thread dispatch/scheduling/context switching units has a large advantage over any software, or partially-software, approach. Fully hardware-based asynchronous compute has the fastest response to asynchronous events, and the best processor execution resource utilization possible!
P.S. True hardware-based asynchronous compute is fully transparent to any software (except the ring 0 level of the OS kernel, mostly for paging/page fault events and other preemptive multitasking OS context switching/hardware interrupt handling events) and is fully implemented in the processor's hardware for CPU/GPU thread scheduling/dispatch/context switching!
For discrete GPUs, the OS lives (mostly) in the card's firmware and GPU drivers, and runs under the control of the system's main OS/driver/OS driver API (WDDM for Windows, kernel drivers for Linux) software stack.
A lousy performance per $, given that the GPU die is ~314mm2 vs. the earlier 398mm2; production costs must be comparable, with even lower overall manufacturing costs compared to the 980, given that memory prices have been falling for a while now.
And those minimum framerates are also very far from the average; a really unbalanced card, or maybe just the fault of the drivers? But even so - disappointing.
Which is neither price nor performance, so how can it affect price/performance?
Given that Pascal clock-for-clock is the same as Maxwell, and the price will be well over the advertised $700 - lousy performance per dollar.
Better?
Good luck running Maxwell at >2GHz.
This is how every industry die shrink happens. Up-front development costs must be recouped somehow. The same happens with Intel.
Are there real price drops for the 970, 980, 980Ti coming? I don't see how those cards can stay at their same prices. If the prices do fall what happens to the performance per dollar of the 1080?
I don't think you'll see formal price cuts on the earlier models. Rather, they'll simply cease production and sell through whatever inventory remains in the channel via clearance pricing.
I can't believe how Nvidia distorts the truth. They're lucky their products are pretty good (just not quite as good as they pretend they are), or there would be a big consumer backlash.
What distortion have you seen here that is not in line with the typical PR preso spin applied from either camp / any product launch?
You're probably right that marketing spin is expected at this point. But I still think it's slimy. Like holding up a $699 (Founders) GPU and claiming the price is $599. Until we see otherwise, it's a $699 USD GPU to me (the Founders premium is even worse in the UK).
I suspect the founders premium is due to constrained initial supplies, so good luck getting one, even if you are willing to pay the price premium.
How much faster is the 1080 vs. a non-reference card like a G1 Gaming, Asus ROG Matrix GTX 980 Ti, etc.? You only benchmarked it against the reference 980 Ti.
It's looking like the overclock headroom is similar between the different models, so a non-reference 1080 would scale against the other non-reference cards proportionally, meaning our reference to reference comparisons will be similar to hypothetical non-reference to non-reference comparisons.
"capitalism states"
More like the laws of supply and demand, and supply and demand works across many economies where there are no price controls, mercantile-like trade restrictions, or centralized communist/other unreasonable market controls. It's much more like what the free market states (with reasonable restrictions to support fair and open markets) and the laws of supply and demand. Crony capitalism can and does run counter to the interests of the free market.
See: the Standard Oil Trust, Intel vs. AMD, M$/M$'s OS market share vs. the third-party PC/laptop OEM market, and Intel too with the third-party PC/laptop OEMs, as prime examples of crony capitalism at its worst!
Ryan - how about doing DX12 benchmarks: AMD Pro Duo vs. GTX 1080? That is AMD's competition for the GTX 1080 for the next 7 months. Do this for us - your fans. DX12 only, please :)
Your Fan
I hate to disappoint, but since there is currently no multi-GPU support in DX12, your PRO DUO vs 1080 comparison would end up looking like a Nano vs 1080, which would look extremely bad for AMD.
You said there was no support in DX12 yet you assume AMD will do worse ???? Love the bias against AMD Allen/Allyn, keep it up. And it comes as no surprise that bias continues on in the reviews at PCper as per usual !
Don't be dumb, he said DX12 doesn't currently support multi-GPU.
Since the PRO DUO == dual Nanos in CrossFire, you will only get the performance of one of them.
The 1080 is a single-GPU card, so its performance doesn't take the same hit.
Oh be quiet.
The Pro Duo is not its competition. I mean, for those with the cash a Pro Duo might seem attractive, but it's more a professional card for certain things.
It wouldn't even be a fair comparison; in a game that supports CrossFire well, the Pro Duo would win.
I don't think there is competition from AMD for the 1080 right now. Vega will obliterate it but the only value in that right now is to make people think all your GPUs are better. Halo product effect.
Yes for multi-GPU enabled titles, Pro Duo would win, but for its cost you could SLI a pair of 1080's, which would in turn beat the Pro Duo.
No, the Pro Duo has been repurposed by AMD as a development SKU, with the very nice option of getting to use the Pro graphics drivers without having to pay twice as much, or even more, for a full professional FirePro SKU. RTG will be using some Vega SKUs to compete with the 1080 and others. Polaris is RTG's mainstream branding, aimed at maybe getting AMD's Polaris GPU SKUs into some laptops; there are probably only 2 Polaris M400 series offerings that are not re-brands, so maybe only 1 mainstream Polaris mobile SKU, one high end laptop Polaris variant, and some other Polaris mainstream desktop variants.
You will have to wait at least a year before there are enough DX12/Vulkan titles, and enough DX12/Vulkan tweaking done on games that are already DX12/Vulkan enabled, for all the necessary tuning of gaming engines and game titles for both APIs. Top that off with the need for all the benchmarking software to catch up to the new DX12/Vulkan APIs, which are themselves still undergoing tweaks and updates for the latest GPU hardware!
All you need to do is suspend your belief in any website's declaring a winner for the next year! So don't believe any hype for a year, and know that any marketing is just hype from a "profession" that traces its roots back to the snake oil salesman, and much further back to when the first lie was told!
Watch out for those lies of omission; there are plenty out there who will provide only some very cherry-picked metrics to spin things positive by omitting important facts that other results would show. So read as many reviews as you can, but wait a year to see the truth!
Allyn, what's the reason behind you guys using driver 348.13?
It was the newest we were provided at the time of testing. OC was done with a slightly newer build.
ryan, got any benchmark for encode/decode h.265 like handbrake etc? thanks..
Handbrake probably has to be updated to support this first, no?
yeah true also.
That is one powerful card; I just wish it cost $399. The only problem with this card is that it still cannot run all games at 4K@60 fps, so if that is what you are looking for you will have to wait for the big chip. That being said, if you intend to run a monitor at 3440x1440 or below for the next few years, this card will hold you over for some time.
I won't lie, I'm a bit let down. A ~25% performance increase at the same price point isn't bad, but I was expecting more for a double node shrink. The efficiency increase is great but not really what matters to me for desktop gaming.
The issue is really the price. It's a small die, so a 980-class GPU (the 980 was 394mm^2 and this is just 314mm^2), and hence its performance improvement is pretty good for the die size. But the issue is it's priced like a 600mm^2 die such as the 980 Ti, and vastly higher than the 980 was at release.
Its price is what is disappointing really; its performance is about right for a node shrink. Then again, I think a lot of us were kind of expecting 20nm + FinFET (16nm as they call it) to be two node jumps in one and hence potentially give us 4x performance, but it's clearly not that good.
Eh, 16FFP is more than just 20nm w/FinFET. Sure, the back end metal layers are using the 20 nm process, but litho and etch are much, much different.
Over 300 mm2 is a big die for 16 or 14 nm. Don't expect 600 mm2 die sizes in the consumer market without truly ridiculous prices. Yields for the same die size will not be comparable to 28 nm parts. Nvidia's 600 mm2 part is in the $15,000 range, AFAIK.
The performance definitely does not match the hype (especially the claims saying it beats 980 SLI) but still an amazing card because of the competitive price point.
Still the fact that they are trying to bring proprietary API features into VR will keep me from supporting them as a company. Hopefully, the competition will be able to step up their game also.
still not the second coming of the christ, but, yeow, that 2+ Ghz is godly.
why no ashes of the singularity?
edit: AoTS isn't included because the 1080 is not putting in work there, yikes.
You are saying Fast Sync should only be used when frames are high. So what is considered a baseline for when you should use it?
If you are *just* above the panel's max, the issue is that slight judder is introduced, which subsides as you get to 2x-3x max. That said, the judder will probably be imperceptible on panels with very high max refresh rates (144). The benefit is that it comes close to v-sync's tear-free output with less latency added when >max panel rate.
"Either NVIDIA was over-designing for the Founders Edition or another GPU that is more power hungry may be coming down the line."
Ooooor, they intend to use the same PCB for a datacentre-dedicated card intended for rackmount chassis, where end-mounted power connectors are required to fit inside a 3U chassis. Compare the Titan X and Quadro M6000 PCBs, for example: identical, other than placement of the power connector.
Fast Sync is just what OpenGL with triple buffering does. Good for NVIDIA if they got it working under Direct3D, but this is nothing new they invented; it has been around for years.
Fast sync != triple buffering, unless somehow OpenGL does it differently than anyone else???
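For what it's worth, here's a toy Python sketch of the distinction as I understand it (my own simplification, not NVIDIA's implementation): classic triple buffering queues completed frames and displays them in order, while Fast Sync always scans out the newest completed frame and discards the rest.

```python
from collections import deque

def simulate(batches, mode):
    """batches: lists of frame IDs finished between consecutive scanouts.
    Returns the frame shown at each scanout."""
    queue = deque()
    shown = []
    for batch in batches:
        queue.extend(batch)
        if mode == "triple_buffer":
            # FIFO: every rendered frame is eventually displayed, in order,
            # so the displayed frame falls further behind the newest one.
            shown.append(queue.popleft())
        else:  # "fast_sync"
            # Scan out only the most recently completed frame; drop the rest.
            shown.append(queue[-1])
            queue.clear()
    return shown

batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # GPU at ~3x the refresh rate
print(simulate(batches, "triple_buffer"))  # [1, 2, 3] (latency grows)
print(simulate(batches, "fast_sync"))      # [3, 6, 9] (always newest)
```

Real triple buffering caps the queue at three buffers and back-pressures the renderer; the unbounded FIFO above is only there to illustrate the ordering and latency difference.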
I can't wait to see the non reference cards.
It will be interesting to see if anyone asks whether NVIDIA thought it was slightly disingenuous to show a card (presumably) fully loaded on stage running at 2.1GHz and 67C, only for the retail products to be no different from the last few cards, with an 80C+ operating temperature.
They had fans at 100% speed, which anyone is free to do with their own card and get similar results (we saw similar with our card here).
That's a dumb excuse. Nvidia didn't mention that in the press conference.
In which case I could showcase any card ever made, crank the fan to 100%, have PC Perspective amazed at how low the temps are, and then come review time skip it entirely.
It's a pretty reasonable assumption that any hardware company doing a live demo of their product is going to crank up the configurable options to the max for that demo. It would have been deceptive if there were some NV-only options that were not consumer facing in the retail product. That's not what happened here. What they showed is achievable, so it's fair. AMD has demoed products in windowless cases that press were not allowed to look inside, so at least we saw the products at this one.
https://www.youtube.com/watch?v=wlSeHCPd75s
I personally don't believe that the fans were cranked up to 100%.
I didn't have time to watch the video yet. Do they say if it's made by Foxconn like the Best Buy exclusive NVIDIA branded cards were?
If they are, I'm definitely waiting for the custom 1080 from Gigabyte. I refuse to give Foxconn any of my money if possible.
Now I'm even less upset that I had to buy a GTX 970 to get my PC gaming on. Yes it was pricey, but it runs the console ports just fine for now.
looks like 4k is still 2 years away
3.5 memory doesn't look like the deal breaker it once was
E3 is next up in June... let's see if console games are going to up the ante
NVIDIA benchworks. As if the Pro Duo "review" wasn't enough.
I don't care for this new card as it won't do solid 60fps at 3440x1440, but I really like your T-shirt in the video!
:-)
I've got 980ti overclocked with ultra-wide display (2560x1080), and I'm waiting for a decent monitor that will support G-Sync. 4K never worked for me as human cone of vision covers too small area when sitting 1m in front of the screen.
I will be waiting for HiDPI 3440x1440 screen with G-Sync, only then I'll upgrade.
All I'm seeing is what a good deal 2x GTX 980 SLI or GTX 1070 SLI might be in the very near future!
Did Ryan schedule another vacation ?
6 GameWorks games
1 Gaming Evolved game
You would kind of hope PC Perspective would be more professional after Allyn had to go on the last podcast and say "Ryan didn't have enough time to get to the other games" and pretend he had a list in front of him with a balanced number of games to be tested.
The game choices look to be consistent / proportional to the popular titles out right now. We have DX12 / AMD results included here.
So many butthurt AMD fanboys here. Pathetic...
980 Ti AMP Extreme owner plus 750 Ti FTW here. Even if I think this should cost less, about $100 less; I mean, even the 980 launched at $549, and it too was better than the Titan Black and slightly better than two 680s in SLI, so stop trolling!
I don't see much of that here other than a few trolls. This review, while interesting, isn't useful yet. Only an Nvidia fanboy would purchase one of these cards without waiting to see what the real competition looks like.
Temps left out for a reason?
What kind of review leaves out Temps?
Appeasing ones.
The 1080 is the same. It reaches 83C and starts to downclock below boost clocks. Remember the attention Ryan gave the reference 290X for clock variance.
I would hope Ryan isn't a hypocrite and writes as many articles as he did back then covering the issue, so Nvidia can address it.
I don't follow here, was there a significant issue with 1080 temps? We did power testing rivaling the other reviews out there. Given that, and how little the 1080 draws given its performance, I can see how Ryan didn't need to focus as much on temps.
For a comprehensive review, Ryan ought to have included it. Whether it's deemed necessary is up to the viewer/readership.
Anonymous post or not, it's a fair comment.
A handful of German sites are reporting that when gaming over a period longer than a quick benchmark, the clocks settle in below boost clocks, and on several main titles the clock sits at base level. There are very few exceptions where the clock exceeds boost level after gaming for more than a handful of minutes. Tom's Hardware shows that after an 8-minute test in Metro: Last Light it settles in between base and boost on a test bench. When running FurMark it goes below base clock.
http://media.bestofmicro.com/W/3/581763/gallery/01-Clock-Rate_w_600.png
Ryan needs to keep an eye on the $599 cards in case they're cutting back and don't perform as well in games, not just on quick benchmark runs.
The Nvidia boost clock abilities (I forget what it is called) make for misleading comparisons. AFAICT, the current 28 nm parts often boost clocks higher than the specified stock boost clock. The stock GFLOPS ratings are not what you get, so Nvidia isn't magically somehow that much more efficient. The only time most people have a problem with clock throttling is if it throttles the clock below the specified base clock. It is generally expected that the card should have sufficient cooling to never drop below the specified base clock. I don't really like these policies; when overclocking is essentially the default, you don't really know what you are buying. It would be difficult to sell 5 different versions of the same card at different clock speeds though.
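To put some numbers on that: peak single-precision throughput is conventionally quoted as cores x clock x 2 (one FMA, counted as two FLOPs, per core per cycle). A quick sketch with the GTX 1080's published figures shows how much the "stock" rating moves depending on which clock the card actually sustains:

```python
def sp_gflops(cuda_cores, clock_ghz):
    """Peak single-precision GFLOPS: 2 FLOPs per core per cycle (FMA)."""
    return cuda_cores * clock_ghz * 2

CORES = 2560  # GTX 1080 CUDA core count
for label, ghz in [("base 1607 MHz", 1.607),
                   ("boost 1733 MHz", 1.733),
                   ("observed ~2.1 GHz", 2.1)]:
    print(f"{label}: {sp_gflops(CORES, ghz):.0f} GFLOPS")
# base -> ~8228, boost -> ~8873, ~2.1 GHz -> ~10752
```

So a card that actually sustains clocks above its rated boost delivers noticeably more than its spec-sheet GFLOPS, which is exactly why spec-to-spec efficiency comparisons can mislead.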
Stop drinking the Nvidia Kool-Aid, Ryan, consistently comparing the 1080 to the 980 like Nvidia wants you to. You should be comparing it to the 980 Ti if you want to give a truly impartial review. The 980 Ti had the same launch price, and is considered to be the card to beat.
980Ti launched at a higher MSRP. Also, I included 980Ti in VR results. 1080 is marketed as their mid-level GPU (as in there will be a 1080Ti), so 1080 to 980 is fair...
*edit* wait a sec, Ryan *did* test 980Ti. What point were you trying to make exactly again?
Unfortunately Nvidia hasn't given a date to the $599 1080. What we have is the $699 Founders Edition.
As of now the 1080 will launch higher than the 980 Ti did.
Will Nvidia extend the time for founders edition sales and push back non founders cards.........you bet ! Watch for customers getting pissed off once again !
Nice higher end enthusiast card. Can't wait for the 1080ti and Titan variants that will eventually follow.
When the prices drop substantially, a second 980Ti will be my choice.
Rather that than ditch the current card.
Pity there weren't results for 980Ti SLI shown too. Yeah, no SLI for DX12, but like a lot of people I'm still on Win7.
Good review, thanks!
Holy crap this review is good. New DX12 tool, VR test and power measuring. The amount of data crammed in here was fantastic. Great work Ryan, Allyn, et al.
The GPU ain't bad neither.
Thanks, it is nice to see someone appreciate how much work it takes to put those together.
Thanks! It was a lot of work for sure!
Kind of disappointing for a new architecture on a double node shrink. Also worrying pricing: if this midrange card is $700, are we going to see $1000 high end (1080ti)? $1500 titan?
The 980ti was released at a lower price and not that much lower performance. One would expect two node shrinks and a $50 increase to provide more than 20% or so additional performance. Even if this is a $600 card, that's still not much performance considering how long we've been waiting for those shrinks - and this is not a $600 card. The product that Ryan received and reviewed costs $700. Sure, you will be able to get similar cards for less in the future, but that's the case for all graphics cards, and you can't judge a product based on future price drops.
The tech (and especially frequency) is pretty nice, but this is a midrange card released for above high-end price.
Currently the fastest single GPU gaming card = midrange? Lol.
If you don't like the price don't buy it. There's plenty of opportunity for competition to come in and offer a better product at a lower price. Until that happens Nvidia will extract as much profit as possible.
I suspect supply is going to be constrained, which is probably most of the reason for the higher Founder's edition price. If the Ti version is based on the 600 mm2 part, then I would expect prices to be ridiculously high. Yields on 16 nm will not be anywhere near as high as it is for 28 nm parts. They probably do have a huge number of defective parts though and these may be functional enough to be sold as a 980 Ti of some flavor eventually. Perhaps they will have multiple configurations of such salvage parts. It will have to wait for HBM2 production to reduce memory prices before they can make a consumer grade part.
Considering the tone of the reviews, it was probably worth it to do an early launch, even if it ends up being a bit of a paper launch, to get all of the favorable publicity. This way, Nvidia's cards get compared to old 28 nm parts rather than AMD's 14 nm parts. It is obvious that the 16 nm part should have significantly better power efficiency than 28 nm parts. Comparisons with 28 nm parts are mostly irrelevant. We don't know how this compares until we have AMD parts for comparison.
Can you please add another DX12 title, Ashes of the Singularity? We only have one real DX12 title, Hitman, until we have another game to try out.
(gear of war was just a bad port... so I guess it doesn't really count)
Because it looks like the ASYNC is closing the gap in fps.
I know Nvidia is using a different type of "async", let's say on the software side of things.
It doesn't really matter how they do it; what will matter in the end is how well games run.
Will PCPer conduct GPGPU testing for both CUDA and OpenCL performance?
I understand the 1080 is marketed toward gaming, but would be great for video processing as well!
Wow, a lot of whining going on here in the comments. You know, if Ryan didn't benchmark the game you are interested in seeing, PC Per isn't the only review site. There are 1080 reviews on all the major sites: Guru3D, HardOCP, Tweak Town, Tom's Hardware, etc. All the whiners are acting as if they would have to be paying registered users of other sites to view their reviews, so they want PC Per to benchmark the games they wanted to see. Literally no intelligent person gets their news / product reviews from a single source. There is nothing wrong with this review. The problem is that the results don't fit the whiners' agenda. So what would you have Ryan do, find the 1 or 2 games that AMD can compete against the 1080 with and only benchmark them? I'm not sure there are that many games the Fury X could compete with the 1080 in. So that review would be very interesting: here are all the new technologies that come with this new GPU, these are the hardware specs, oh and sorry, no benchmarks today (whispers behind his hand "because AMD couldn't beat or compete against it"), so sorry.
As to pricing, sure it could be cheaper, so could Tesla's Model S, or anything that's new and awesome. However that's not how things work. If you don't like the pricing, don't buy one. Free market will dictate how much they go for. So if nVidia finds these are sitting on the shelf for a month with little turnover, then the price will come down. Don't hold your breath for that to happen though. These $700 Founders Editions are going to fly off the shelf. For one reason, anyone wanting to water cool these right away is going to want them. It's almost guaranteed that there will be waterblocks for these "reference" cards on day 1. Even if nVidia had to run the fan at 100% to hit 2.1GHz, that's still on air. I can't believe anyone is complaining about a GPU that can run @ 2.1GHz on air. Before this, that sort of clocking required serious cooling, at the very least custom watercooling or more likely LN2. So if it'll run @ 2.1GHz on air with the fan at maximum, what can we expect it to run under water? The benchmark kings must be creaming their jeans right now. They are shining their LN2 pots right now just waiting to get their hands on a 1080. How long before we start seeing Fire Strike Ultra scores showing up at the top of the benchmark charts with GPU clocks north of 3GHz?!?
I for one am not disappointed. This thing is a beast. For those wanting more, just wait for the 1080Ti. I'd tell you to wait for Polaris 10, but what would be the point? They've already said these cards are going to be mid level cards with a reduced power profile. Not much to get excited about. Also, if rumors are to be believed, we may not see Polaris until October now. According to the rumors Polaris failed validation, so it looks like it might need another silicon spin.
I'm not trying to beat up on AMD, I root for all the new hardware. I really liked the 390/X series, and the Fury series was also decent when it released. But now they aren't the new shiny anymore. Obviously their prices are going to need to come down, especially when the $600 1080's appear in a month and a half. Timing isn't great for AMD; they didn't get nearly long enough after the release of the Fury series being on top. They really needed another 6 months. Unfortunately for them, their releases of new high end products are spaced too far apart. It always seems that when AMD releases the new top dog, nVidia is right around the corner with something to slap it back down. Then nVidia releases something even faster before AMD has even had a chance to respond to the previous release.
TL;DR
The asynchronous compute still sounds like it will be mostly unusable. Even if it can preempt at the instruction level, it sounds like it still needs an expensive context switch. A 100-microsecond context switch seems really slow, comparatively speaking.
It would be great to get more in depth analysis of this without spin. It seemed to me that multi-GPU was going to be implemented partially with asynchronous compute going forward. With the asynchronous compute that AMD has implemented for years, compute can be handled by any available compute resource, regardless of location. They could even run on an integrated GPU to supplement the main GPU.
The current techniques are probably a bit of a dead end. AFR and split screen type rendering just doesn't scale well. Splitting up the workload with finer granularity will be required to make good use of multiple GPUs. Nvidia is holding back the market in this case. If developers want to support multi-GPU on the large installed Nvidia base, then they will not be able to use asynchronous compute to achieve it. Hopefully the market can move forward with asynchronous compute independently of Nvidia due to the installed base in all of the consoles. It will be worthwhile to implement asynchronous compute features for the consoles, so PC ports for AMD cards can hopefully make use of that.
The multi-projection stuff seems interesting, but it also seems like something that can probably be done quite efficiently in software. It would be good if you can get info from game developers on how this will compare to software solutions. I tend to think that VR will be best served by multi-GPU set-ups, as long as the software support is there. Nvidia seems to have gone the big GPU route, so it is not in their best interest to support multi-GPU set-ups; this opens them to more competition from makers of smaller GPUs. This may not only be AMD going forward, especially in mobile.
Really comprehensive analysis here team, thanks for putting it together! The videos with Tom are quite informative too. It will be hard to justify the price for me personally, but imagine how great it would fold! The 1070 might be more for me ;)
1. HDR - what kind? Dolby Vision? HDR10? Or just Standard Dynamic Range (SDR)? Does the card support 1000-nit or 4000-nit HDR? As usual, is this supported by the card's hardware or its software?
2. In the picture
http://www.guru3d.com/index.php?ct=a...=file&id=21784
it says contrast over 10,000:1 with the NVIDIA 1080. In other words, with previous video cards we got 2000:1 contrast???
3. What color system does the NVIDIA 1080 video card support? The Rec. 2020 color gamut, DCI-P3, or only Rec. 709?
http://www.guru3d.com/articles_pages..._review,2.html
If anyone has an answer, please bring links. It is not a question of logic.
Any GPU can support any color system - the catch is having enough additional bit depth available to prevent obvious banding when operating in that expanded system. Support for the color system itself really boils down to the display capability and calibration.
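To put rough numbers on the banding point: the same number of code values stretched over a wider gamut means each quantization step covers a bigger jump, so an 8-bit gradient bands more visibly in Rec. 2020 than in Rec. 709. A toy sketch (the 2.0x gamut-width ratio is purely illustrative, not a colorimetric measurement):

```python
def steps(bit_depth):
    """Distinct code values per channel at a given bit depth."""
    return 2 ** bit_depth

def step_size(gamut_width, bit_depth):
    """Relative size of one quantization step across a gamut of given width."""
    return gamut_width / (steps(bit_depth) - 1)

# Treat Rec. 709 as width 1.0 and Rec. 2020 as ~2x wider (illustrative only).
print(step_size(1.0, 8))   # 8-bit over Rec. 709:  ~0.0039
print(step_size(2.0, 8))   # 8-bit over Rec. 2020: ~2x coarser -> more banding
print(step_size(2.0, 10))  # 10-bit over Rec. 2020: ~4x finer than 8-bit
```

This is why Allyn's answer comes down to bit depth and the display: the GPU can address any gamut, but without the extra bits the wider space just spreads the same 256 levels thinner.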
It is interesting to note that Ashes disables Async Compute if it detects an nVidia card - including Pascal, so we still don't have an accurate representation of how nVidia really does in that benchmark/game.
I'm upgrading from my 970 to this beast of a card. Simply brilliant for new games.