NVIDIA GeForce GTX 1080 and GTX 1070 Announced

Subject: Graphics Cards | May 6, 2016 - 10:38 PM |
Tagged: pascal, nvidia, GTX 1080, gtx 1070, GP104, geforce

So NVIDIA has announced its next generation of graphics processors, based on the Pascal architecture. The company introduced the GTX 1080 as “a new king,” claiming it is faster than the Titan X at lower power. It will be available “around the world” on May 27th for $599 USD (MSRP). The GTX 1070 was also announced with slightly reduced specifications, and it will be available on June 10th for $379 USD (MSRP).


Pascal is fabricated on TSMC's 16nm process, which gives NVIDIA a lot of headroom. The GTX 1080 has fewer shaders than the Titan X, but a significantly higher clock rate. It also uses GDDR5X, which is an incremental improvement over GDDR5. We knew it wasn't going to use HBM2, like Big Pascal does, but it's interesting that they did not stick with old, reliable GDDR5.


The full specifications of the GTX 1080 are as follows:

  • 2560 CUDA Cores
  • 1607 MHz Base Clock (8.2 TFLOPs)
  • 1733 MHz Boost Clock (8.9 TFLOPs)
  • 8GB GDDR5X Memory at 320 GB/s (256-bit)
  • 180W Listed Power (Update: uses 1x 8-pin power)

We do not currently have the full specifications of the GTX 1070, apart from its 6.5 TFLOPs rating.
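As a sanity check, the headline TFLOPs and bandwidth figures follow directly from the listed specs. A minimal sketch, assuming the usual 2 FLOPs per CUDA core per clock (one fused multiply-add) and GDDR5X's commonly cited 10 Gbps effective data rate:

```python
# Rough sanity check on NVIDIA's listed figures. Assumes 2 FMA FLOPs
# per CUDA core per clock and a 10 Gbps effective GDDR5X data rate.
cores = 2560

def tflops(clock_mhz, cores=cores, flops_per_clock=2):
    """Peak single-precision throughput in TFLOPs."""
    return cores * flops_per_clock * clock_mhz * 1e6 / 1e12

base = tflops(1607)    # ~8.2 TFLOPs at base clock
boost = tflops(1733)   # ~8.9 TFLOPs at boost clock

# Memory bandwidth: bus width (bits) / 8 bytes-per-bit * effective rate (GT/s)
bandwidth_gbs = 256 / 8 * 10  # = 320 GB/s

print(f"base: {base:.1f} TFLOPs, boost: {boost:.1f} TFLOPs, "
      f"bandwidth: {bandwidth_gbs:.0f} GB/s")
```

The numbers line up with the bullet list above, which suggests the quoted TFLOPs are the standard peak-FMA calculation rather than anything exotic.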


It also looks like it has five display outputs: 3x DisplayPort 1.2, which are “ready” for 1.3 and 1.4, 1x HDMI 2.0b, and 1x DL-DVI. They do not explicitly state that all three DisplayPorts will run on the same standard, even though that seems likely. They also do not state whether all five outputs can be used simultaneously, but I hope that they can be.


They also have a new SLI bridge, called the SLI HB Bridge, that is supposed to have double the bandwidth of Maxwell's bridge. I'm not sure what that will mean for multi-GPU systems, but it will probably be something we'll find out about soon.

Source: NVIDIA


May 6, 2016 | 10:50 PM - Posted by Chaitanya Shukla

As usual, prices have been jacked up.

May 6, 2016 | 10:55 PM - Posted by Jimmy Dean (not verified)

Yeah, $379 for a card faster than a Titan X is a huge wallet killer. ...wait, what?

May 6, 2016 | 11:40 PM - Posted by Bezzell

We'll have to see actual benchmarks.

May 7, 2016 | 10:42 AM - Posted by Anonymous (not verified)

If benchmarks are your favorite games, these cards are for you...

May 6, 2016 | 11:35 PM - Posted by Eric (not verified)

I believe he's referencing the price relative to the previous generation with the same item in the lineup. Eg, the 970 was $299 MSRP, whereas the 1070 is $379. The 980 was $549, the 1080 is $599.

May 6, 2016 | 11:38 PM - Posted by DevilDawg (not verified)

Correction: the GTX 970 had an MSRP of $329.00.

May 7, 2016 | 12:22 AM - Posted by BillDStrong

But those weren't their prices at launch.

May 7, 2016 | 12:51 AM - Posted by DevilDawg (not verified)

It did not launch at $299.00. Most retail prices were a bit higher, but $329.00 was the launch MSRP.

May 14, 2016 | 08:59 AM - Posted by theBrayn

You neglect to mention that the 770 launched at $399. An architecture change will always launch higher than a refinement.

May 7, 2016 | 01:20 AM - Posted by JxcelDolghmQ (not verified)

Nvidia really struck marketing gold with the titan idea: if you create a ludicrously priced joke card, people will praise your other prices no matter how high they are. $600-700 single GPU cards didn't exist not too long ago (aside from professional cards), but now, not only are they accepted, some people even call them a good value.


Maybe AMD should try something similar. That should help their margins. Or, better yet, maybe GPU customers could stop being such absolute pushovers.

May 7, 2016 | 01:28 AM - Posted by Anonymous (not verified)

"Maybe AMD should try something similar." - Pro Duo says hello!

May 7, 2016 | 02:21 AM - Posted by JxcelDolghmQ (not verified)

That's dual GPU. It also runs workstation drivers, and those cards have always been expensive.

May 7, 2016 | 05:13 PM - Posted by Anonymous (not verified)

But somehow they advertise it as "For Gamers Who Create and Creators Who Game"; that doesn't sound like a workstation-only slogan to me.

May 7, 2016 | 06:26 PM - Posted by Anonymous (not verified)

With the option of using the pro drivers and not having to pay $4000+ for the pro SKUs, it's a great deal for a creator to use and save some serious dosh! Those pro drivers cost a lot to develop and certify for the pro graphics packages, so that in itself is reason enough. The gaming drivers are gimped down for speed at the cost of accuracy, and without the pro drivers, targeting the pro applications and pro VR markets is impossible!

So if you want to game on that AMD Pro Duo SKU, install the gaming drivers; for pro development work, install the pro drivers. There will be plenty of pro VR applications for engineering work in 3D virtual environments! Look at the Boeing 777: lots of 3D VR simulations, even for maintenance routines, where maintenance workers simulated tasks in 3D virtual environments and designs were changed to make maintenance work easier and faster!

So that AMD Pro Duo is only $1500, and can be used to develop software for cards that cost $4000+. The only difference is that the costly FirePro versions come with error correction and other reliability features in the FirePro SKUs' hardware, so you cannot use the Radeon Pro Duo for production work, but it can be used for developing software for the more costly FirePro branded SKUs, which can/have to be used for production work (engineering and other work where errors can cost lives and error correction is needed/required).

May 7, 2016 | 06:40 PM - Posted by arbiter

Yeah, the Titan was always a card for pros who still want to game, but it still got slammed for its price, so the overpriced Fury Pro can take the same lumps.

May 7, 2016 | 07:12 PM - Posted by Anonymous (not verified)

It's still lower than what that Titan Z ($3000) SKU cost when it was introduced! So there go your lumps of cash, into JHH's bank account! The AMD Pro Duo is a good deal more affordable than the Titan Z (even now)!

May 7, 2016 | 09:24 PM - Posted by Anonymous (not verified)

Well, if you put it this way, I would happily buy TWO Fury Xes instead.

May 7, 2016 | 01:56 AM - Posted by Anonymous (not verified)

Yep, that's precisely the way you structure pricing. You make the top end way more expensive for little more performance, you make the low end way too expensive for no performance, and you guide everyone towards the mid-size chips with the highest yield, which you make the most profit on. The high and low end could be way cheaper for the performance they give, but that would mean your mid-size chips didn't look as good a value.

May 7, 2016 | 02:28 AM - Posted by JxcelDolghmQ (not verified)

Sure, the top end was always relatively poor value, but Nvidia really took that to another level with the titan cards. Also, in the past, people complained about the $600 GPUs instead of praising them as a good value. Some of the comments on this article demonstrate just how effective the titan marketing was.

May 7, 2016 | 10:18 AM - Posted by Anonymous (not verified)

Don't forget about that other part of Nvidia's product line segmentation: reducing the compute on its consumer SKUs and marketing the power savings. That will not work for the VR gaming segment, where the gaming engines will be accelerating on the GPU more of the non-gaming compute traditionally done on the CPU. To get the CPU-to-GPU communication latency inherent in PCI/other protocols down as low as possible, VR gaming engines will be moving as much of a VR game's graphics and gaming compute onto the GPU as they can. Those VR frame rates have to be as high as possible to make those cookie-tossing incidents as nonexistent as they can realistically be!

Nvidia has already taken some steps towards improving the fine-grained GPU processor thread scheduling and dispatching on its P100 line of HPC GPU accelerators, and just how much of that improvement is going to make it into the consumer SKUs is still unanswered. AMD has not been skimping on the asynchronous compute features of its GCN-based SKUs and has in fact been improving those GCN ACE units with every new update/generation of GCN.

With DX11 and before, Nvidia has not been at much of a disadvantage for not having the asynchronous compute features fully implemented in its consumer GPU's hardware, but with DX12/Vulkan and the VR gaming market just starting up Nvidia will have to bring those hardware features down into its consumer SKUs to compete in the VR gaming market.

There is only so much latency hiding that can be done in software/VR middleware without having the asynchronous compute features fully implemented in the GPU's hardware. There is much less room for any latency at the high VR frame rates that are necessary for VR gaming and for keeping folks off of Dramamine!

May 7, 2016 | 11:58 AM - Posted by Anonymous (not verified)

P.S. You "technology reporters" should not use the acronym SMP for Nvidia's Simultaneous Multi-Projection; it's going to lead to lots of confusion with Google searches that turn up articles on SMP (symmetric multiprocessing), even some Nvidia white papers on SMP (symmetric multiprocessing)!

Call Nvidia's Simultaneous Multi-Projection "S-MP," or something else for short, at least until Wikipedia gets a proper disambiguation entry for the projection kind of SMP in Nvidia's new nomenclature!

Even on Anandtech they are calling it SMP for short, and the damn overloading of computing acronyms is going to cause confusion even for folks with good Google-Fu skills!


May 8, 2016 | 05:42 PM - Posted by Anonymous (not verified)

That market segmentation is somewhat of a separate thing from releasing something like the Titan X. There was almost no reason for any gamer to buy a Titan X. It mostly just got them a massive amount of publicity that makes AMD cards look bad in comparison, even though the Nvidia part wasn't a comparably priced product, even if it can be considered a real product. How much of the market is even 980 Tis, much less Titan Xs? It is also designed to reinforce the idea of Nvidia as the high-end maker and AMD as the low-end one. That is nothing but marketing, though. At almost all price points, AMD represents the better value most of the time. This is like comparing an auto company that makes a sports car that costs hundreds of thousands of dollars against one that does not. The existence of that sports car might be good marketing, but it is foolish to let it sway your opinion when shopping for a reasonably priced sedan.

May 7, 2016 | 02:16 AM - Posted by Marquzz (not verified)

Tell that to my wallet when I spent that sum on an 8800 GTX. Memory is short...

May 7, 2016 | 02:41 AM - Posted by Anonymous (not verified)

Um, the 8800 Ultra launched at $830+ USD. And if you go back even further, the Radeon X850 XT Platinum was $750 USD, and that was in like 2005.

May 7, 2016 | 02:57 AM - Posted by JxcelDolghmQ (not verified)

Fair enough, they did exist. But people didn't praise them as a good value the way some do today. Nvidia's marketing is still undeniably successful.

May 9, 2016 | 01:08 PM - Posted by Anonymous (not verified)

No one is exactly praising these as a good value by themselves; they are a "good value" compared to current products because they offer a significant performance increase at a minimal cost increase. It's normal for the next generation of any product to be a better value than the previous one.

May 7, 2016 | 09:30 AM - Posted by Debx (not verified)

You pasted the same comment in many other places; are you working for NV marketing?

if not then you should never compare the old gen gpus with the new ones, ofc it MUST be faster and the GTX 970 was faster with much lower power consumption that the original Titan card.

$380 for the market that it's targeting is way too much!

May 7, 2016 | 09:31 AM - Posted by Debx (not verified)

*than the original titan

May 8, 2016 | 12:24 AM - Posted by BlackDove (not verified)

WTF no they haven't. The 80 series cards have been that price for years, and if you look at REAL inflation numbers and the purchasing power of the dollar, they're cheaper.

May 10, 2016 | 09:47 AM - Posted by theBrayn

It's almost as if they have to recoup the cost of new architecture R&D or something. OUTRAGEOUS!!!

May 6, 2016 | 10:56 PM - Posted by Anonymous (not verified)

No word on DX12 game performance?

May 7, 2016 | 03:59 AM - Posted by renz (not verified)

They only mention/show Tomb Raider for DX12 games, which is one of the DX12 titles where GeForce can be faster than Radeon right now. DX12 probably doesn't change things much. Nvidia might be using brute force to negate the performance difference in DX12. Also, depending on which side is sponsoring the games, we will see the tide swing between Nvidia and AMD from time to time; meaning Nvidia will be faster in DX12 titles that are sponsored by them.

May 6, 2016 | 11:00 PM - Posted by Anonymous (not verified)

Any mention of asynchronous compute specs?

May 6, 2016 | 11:00 PM - Posted by JonnyBigBoss

The 1080 is so amazing. I want it now!

May 6, 2016 | 11:02 PM - Posted by Venn Stone (not verified)

Soooo will a low-profile 950Ti only be slightly faster than a TITAN X?

May 6, 2016 | 11:02 PM - Posted by Anon (not verified)

Nvidia just took a dump in AMD's mouth

May 6, 2016 | 11:03 PM - Posted by repsup100 (not verified)

AMD Polaris price $13.99 and a hug good bye

May 6, 2016 | 11:07 PM - Posted by Anonymous (not verified)

Any news on support for Adaptive Sync or do you still have to buy the other half of the card with a monitor?

May 6, 2016 | 11:15 PM - Posted by Anonymous (not verified)

If you read the Nvidia 1080 page.

2 - DisplayPort 1.2 Certified, DisplayPort 1.3/1.4 Ready

Don't think they'll be in a rush to support it since its not 1.2a or above certified.

"Ready" usually means compatible, and DisplayPort is backwards compatible anyway.

May 6, 2016 | 11:41 PM - Posted by Klyde (not verified)

This confuses me, is it DP 1.4 or not?

May 6, 2016 | 11:54 PM - Posted by arbiter

Um, Adaptive Sync isn't required. It's an optional part of DP, so a card can be certified for DP 1.4 without it.

May 6, 2016 | 11:57 PM - Posted by Anonymous (not verified)

But it's not certified for 1.3 or 1.4.

May 6, 2016 | 11:08 PM - Posted by Alamo

1080 with 2x Titan X performance and 3x efficiency, that's a bold claim; I hope this is not another PR coup that ends up with underwhelming results.
The show was terrific though. AMD is in full panic mode, I bet; it's a pretty hard sell for them now. Their efficiency and prices need to reflect the vast shift in performance, and they need to undercut the 1070's price even if they somehow manage to outperform it with Polaris 10.
AMD needs to play it very smart and sweep the low/mid range, and size up their Vega.

May 6, 2016 | 11:19 PM - Posted by Alamo

OMG, this is so misleading. I was playing LoL while listening to the stream, peeking from time to time, but now that I've seen the slides, it actually says twice the performance on VR games. So this is taking into account a 60% boost from their projection thingy, which means the 1080 is not twice the performance of a Titan X but around 40% faster, and even efficiency won't be 3x on regular work without that projection thingy. Feel so cheated, lol.

May 7, 2016 | 12:30 AM - Posted by BillDStrong

That is actually major, though. I can foresee lots of use cases for that tech that can make games better.

First, their own examples of VR and multiple monitors, of course.

Next, how about playing Paragon in a two-monitor setup, one top-down and the other first-person?

Or creating large virtual rooms for parties.

How about a minimap that is really just a zoomed-out view of the map?

Now, it shouldn't affect most previous titles, unless there is a way to play them in a VR headset, but this is some very cool stuff.

May 7, 2016 | 01:30 AM - Posted by Alamo

I need to watch the stream again, but I think that is VR-specific, not Surround, and it doesn't boost the performance of VR by 60%; it just avoids spending the extra performance used to display on 2 screens. That's why it says VR games... the projection works differently in Surround: in VR you get one image that you shift with projection for the 2nd eye, while in Surround you still need to display the added landscape.
This is how I understood it, but again, I was just listening and need to rewatch. There is a lot of misleading stuff; anyone just listening to the audio of that event would have a completely different take on it once he sees the slides.

Besides, VR games won't need that much power, because the leading platform isn't PC, it's the PS4, and 90% of the games will be ports from the PS4. Even those made on PC will be tailored to be ported to the PS4 later on. You just cannot compare 1 million Vive/Oculus users to 50 million potential PSVR users; once again, VR games will be stuck at whatever level the dominant market's performance is (PS4).

May 7, 2016 | 04:09 AM - Posted by BillDStrong

The VR-specific implementation uses 8 projections, 4 per eye, that are angled to allow them to compute less of the world for the same or similar experience.

It is a simple use case for the 16 projections available on the card. Similar tricks could be done in other use cases, such as split-screen multiplayer, where you know the size of the projection has changed, so you fit the projection to the appropriate amount of performance.

Games developers are notorious for adapting new tech in ways it was never intended for. These projections are much more versatile than just using them in this one trick for VR. It remains to be seen how it will ultimately affect games, however.

May 7, 2016 | 05:54 PM - Posted by BillDStrong

Just read a report of one game using multiple projections for 4K. They segment the outer portion of the screen to a lower resolution, while keeping the detail in the center of the screen, where your eyes are focused most of the time.
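To get a feel for why that helps, here is a purely illustrative pixel-count comparison (the region sizes and half-resolution periphery are assumptions for the sake of the example, not figures from any shipping title): keep a central region at native 4K and shade the periphery at half the linear resolution.

```python
# Illustrative only: pixel savings from rendering the screen periphery
# at half linear resolution while keeping the center at native 4K.
full_w, full_h = 3840, 2160
center_w, center_h = full_w // 2, full_h // 2   # central region at native res

full_pixels = full_w * full_h                   # 8,294,400 shaded natively
center_pixels = center_w * center_h             # 2,073,600 in the center
periphery_native = full_pixels - center_pixels  # 6,220,800 around the edges
periphery_half = periphery_native // 4          # half res in both x and y

shaded = center_pixels + periphery_half         # 3,628,800 total
print(f"shaded {shaded:,} of {full_pixels:,} pixels "
      f"({100 * shaded / full_pixels:.0f}% of native work)")
```

Under those assumptions the GPU shades roughly 44% of the native pixel count, which is the kind of saving that would make "4K" playable on hardware that otherwise couldn't sustain it.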

May 7, 2016 | 11:54 PM - Posted by Alamo

I saw a video of a Sony VR conference for developers, explaining to them how VR games should be done and what to avoid, and they talked about something similar to this. I think Valve did too. I don't think this is it; it might be more complicated than that.

May 7, 2016 | 04:01 PM - Posted by jckaboom (not verified)

"All games maxed out," but at 1080p, I guess.
The slides show something like 15%-20% more than a Titan X in relative performance. Let's wait for the benchmarks.
Price still high, IMO.
Also, for $600 USD we should all be asking for 4K@60 on every game on the planet.

May 7, 2016 | 12:27 PM - Posted by ppi (not verified)

AMD certainly is not in panic mode. nVidia released >300mm2 parts. AMD is reportedly working on 232mm2 and some ~120mm2 parts.

Therefore nVidia and AMD are effectively going to split the market. And AMD now knows nVidia priced new cards pretty high, which gives them opportunity to price higher (=profits).

Also note the press release is written so that only the $700 1080 is going to be available on 27 May.

May 7, 2016 | 06:48 PM - Posted by arbiter

Um, priced them pretty high? They are only 10-15% higher than the previous-gen cards these replaced. That price bump is due to the chips being new and more expensive to produce. It's not an older, mature node like the 970/980 cards used, with widely available GDDR5 RAM; the 1080 uses GDDR5X, which is not yet mass-produced.

May 7, 2016 | 03:40 PM - Posted by Anonymous (not verified)

He was referring to 2x the performance of a Titan X in VR using the multi-blah rendering, blah blah blah. The 1080 is absolutely not 2x the performance of a Titan X; it is around 25% faster on Nvidia's website when benchmarking DX11/12 games.

May 6, 2016 | 11:10 PM - Posted by Anonymous (not verified)

What about the fine-grained GPU processor thread scheduling on these consumer SKUs? The P100 has improved thread scheduling; what about these consumer SKUs? More about the actual hardware, please; it's not only about the benchmarks!

May 6, 2016 | 11:14 PM - Posted by Anonymous (not verified)

What is the Founders Edition that is $100 more? Is that instead of a higher-specced Ti version?

May 6, 2016 | 11:35 PM - Posted by biohazard918

I think it's just clocked higher, or some kind of overclocking card; it's most likely the same core. It could also come with a box of swag or something.

May 6, 2016 | 11:43 PM - Posted by Anonymous (not verified)

Reference design price.

May 7, 2016 | 01:30 PM - Posted by quest4glory

From what I could gather, it's not just a "reference design price", it's the price for the new reference cooler which will not be available from anyplace other than Nvidia. So the AIB partners will be able to take the reference design and slap custom coolers on them and charge less, or more, or whatever they want to do.

May 8, 2016 | 11:23 PM - Posted by quest4glory

Looks like I was right, as confirmed today by Gamers Nexus.

May 6, 2016 | 11:55 PM - Posted by arbiter

The Founders one has more unlocked overclocking possibilities and likely a better-binned chip to get higher clocks.

May 6, 2016 | 11:32 PM - Posted by Anonymous (not verified)

shiiiiit im getting a 1080

May 7, 2016 | 12:06 AM - Posted by Sunain (not verified)

I have a 650 watt psu. Do you think with the lower TDP of these cards that SLI is now feasible with that PSU?

May 7, 2016 | 12:15 AM - Posted by Ruben Ferreira (not verified)

Why don't you just get an 800-1000 watt power supply? You are talking about spending $1200 on video cards. What's an extra $120 for a more-than-powerful-enough PSU?

May 7, 2016 | 03:36 AM - Posted by Anonymous (not verified)

Seriously, what a stupid thought process. And no, you sure as fuck SHOULDN'T run SLI 1070's, or 1080's on a single 650w PSU Dipshit, duh.

May 7, 2016 | 03:08 PM - Posted by Kevin (not verified)

I think you could have made your point without all the personal insults, which there is no call for.

May 7, 2016 | 09:39 PM - Posted by Anonymous (not verified)

1080 has an 8pin power connector = 2(150)+25(2) = 350 W
I7 6700k = about 91W

Your other components are going to use less than 100W

So a single 650W would handle 2 1080s or 2 1070's dipshit.

Also go fuck yourself for being a champion asshole.
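Setting the name-calling aside, a rough power budget can be sketched from the article's figures plus the PCIe limits (75 W from the x16 slot, 150 W from an 8-pin connector). The CPU and "other components" wattages below are illustrative guesses, not measurements:

```python
# Back-of-the-envelope power budget for 2x GTX 1080 on a 650 W PSU.
# TDP and 1x 8-pin from the article; slot/connector limits from the PCIe
# spec; CPU and "other" figures are rough assumptions. Real transient
# draw and PSU efficiency will shift these numbers.
PCIE_SLOT_W = 75       # max delivered through a PCIe x16 slot
EIGHT_PIN_W = 150      # max through one 8-pin PCIe connector
GPU_TDP_W = 180        # GTX 1080 listed board power

per_card_ceiling = PCIE_SLOT_W + EIGHT_PIN_W   # 225 W absolute max per card
cpu_w = 91             # e.g. an i7-6700K at its rated TDP
other_w = 100          # drives, fans, motherboard (rough guess)

typical = 2 * GPU_TDP_W + cpu_w + other_w            # ~551 W sustained
worst_case = 2 * per_card_ceiling + cpu_w + other_w  # ~641 W absolute ceiling

print(f"typical ~{typical} W, worst case ~{worst_case} W vs a 650 W PSU")
```

Under these assumptions a quality 650 W unit would sit around 551 W sustained, with an absolute connector-limited ceiling of about 641 W, so two 1080s look feasible but leave very little margin.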

May 7, 2016 | 04:07 AM - Posted by NamelessTed

Given that the GTX 1080 is rated at a considerably lower TDP than a 980 Ti, I would say you will be fine running SLI on a 650W PSU, given the few benchmarks that I found online.




Several of the results show the entire system draw of a rig with 2x 980Tis in SLI is in the 500-550W range depending on the benchmark/game running. The last source shows the ASUS MATRIX cards actually requiring 721W but the Strix cards in SLI result in 644W. The TDP of the 1080 is considerably lower than the 980Ti, 180W vs 250W.

Of course, I would wait for reviews. I am sure somebody will have benchmarks up soon, and there should hopefully be one or two sources that will show an entire system draw in SLI.

May 7, 2016 | 01:30 PM - Posted by Steve-o-rino (not verified)

You seem to be in the know...if you have a minute..

I have a new 2016/Alienware X51 R3 with the GTX 970. My question is will the 1070 have the same or lower power draw than the 970? The 970 is 140 W

May 8, 2016 | 03:18 AM - Posted by Anonymous (not verified)

The efficiency of the PSU should be considered too, though it's not like it's going to be pulling the full 650 watts at all times just because the cards are running.

May 7, 2016 | 12:10 AM - Posted by Funkatronis

Considering I'm running a GTX 690 at this point, I think this will be a great upgrade to take .... honestly it would be great if they did a 1090, but I doubt that will happen as the Titan Black I believe was the last dual chip Nvidia GPU. Also since I'm not yet running full 4K, it should be a very nice bump to games running in 3440x1440.

One question about the DP: if something is "ready" and not certified, does that mean they can drop features? Really, the exciting thing is the ability to actually see HDR; even if games are using it, we still can't see it, as monitors and Windows(?) don't support HDR at this point.

May 7, 2016 | 12:45 AM - Posted by Anonymous (not verified)

To be certified you have to be compliant with all the latest features. DP is backwards compatible; you can't skip features and thus be certified with the latest but not the ones below.

Gsync works on DP 1.2 and the way they are using it might be causing them issues to be certified with anything above that.

May 7, 2016 | 03:29 AM - Posted by Anonymous (not verified)

Yeah cause SLI works in what, 3 games now? LOL I mean, I have 3/4-way Titans, and 2-3-way 980Ti's. Unless I'm playing BF4 on my 144Hz BenQ XL2420G, I'm not enabling SLI.

May 7, 2016 | 06:52 PM - Posted by arbiter

Works in more games, and better, than CF does.

May 8, 2016 | 05:41 PM - Posted by Anonymous (not verified)

You have 3/4-way cards and 2-3-way cards? I don't think you have either.

May 7, 2016 | 11:32 PM - Posted by Aberkae (not verified)

One 1080 is like 2 690s in quad sli if not better.

May 7, 2016 | 12:11 AM - Posted by Tobi (not verified)

I was low-key kinda hoping for benchmarks today. What's good pcper?

May 7, 2016 | 12:33 AM - Posted by BillDStrong

Since Ryan is on vacation, even if they had the cards, and I wouldn't expect that before a week before release, they haven't had time to do the benchmarks yet. Ryan ran out of time to review all the games he wanted to for the Radeon Pro Duo, let alone a card that has just been announced.

May 7, 2016 | 12:47 AM - Posted by Anonymous (not verified)

Wasn't Ryan at this event? Vacation for Ryan means going to Nvidia events.

May 7, 2016 | 11:35 AM - Posted by BillDStrong

Nope, Ryan went to a beach somewhere, at least according to last week's podcast.

May 8, 2016 | 04:21 PM - Posted by Anonymous (not verified)

Ryan could be at some BENCH, doing some sort of BENCH like things under some sort of you are not authorized to say YET sort of agreement! He may be playing volleyball at this BENCH, but it's mostly involving volleys of various sized and shapes of objects that may or may not be actual volleyballs, but could in fact be volleyball shaped things, or other more streamlined objects with various nougat like fillings wrapped in full metal jackets that when set off expand at beyond the supersonic/hypersonic speed regiments associated with NOT staying in one piece!

In fact Ryan could in fact be at the beach, at some BENCH, and doing various things one does at the beach, at some BENCH! And while at that BENCH at the Beach his activities may involve some or all of the things stated in the first paragraph of this post!

May 7, 2016 | 12:34 AM - Posted by Anonymous (not verified)

That cooling shroud is Transformers af but the rear plate is damn sexy.

May 7, 2016 | 01:27 AM - Posted by Anonymous (not verified)

nah, it looks like something batman would drive.

May 7, 2016 | 01:47 AM - Posted by Anonymous (not verified)

Lucius Fox would never make such an atrocity.

May 7, 2016 | 01:10 AM - Posted by Anonymous (not verified)

The question for me is whether or not the performance warrants an upgrade from my existing 980ti or not.

May 7, 2016 | 01:34 AM - Posted by JxcelDolghmQ (not verified)

Probably not, unless this card overclocks extremely well, which is a slight possibility, as we don't yet know much about this new process. Upgrading between consecutive generations is never really "worth it".

May 7, 2016 | 01:45 AM - Posted by Anonymous (not verified)

Unless you're going VR right away, it would be better to wait for GP100/HBM2 based cards, if you're going to upgrade.
I have a 980Ti as well... And plan to wait... though that new 1080 is really tempting for lower heat generation...

May 7, 2016 | 06:12 AM - Posted by John H (not verified)

The VR 2x claim has an asterisk: it requires Pascal's special features, which means a game must support those features. So, just like their early VR SLI claim, this is also not going to be seen anytime soon. 980 Ti to 1080 will be no different in VR than on the flat screen for a while, IMO. Wait for the 1080 Ti.

May 7, 2016 | 06:53 PM - Posted by arbiter

From a 980 Ti? No, it doesn't; you would have to wait for the 1080 Ti before you could say it's worth it.

May 8, 2016 | 12:43 AM - Posted by renz (not verified)

I believe this card is targeted more towards the 980 and below. But there are some people willing to upgrade even if the performance increase is only a mere 10%. I personally own a 970 (gaming on a 1080p monitor), but I wonder what I would play with that much performance, lol. Some of the games that I played simply have performance issues regardless of how powerful your hardware is (XCOM 2), and right now I'm replaying Dishonored. Maybe I won't upgrade until the 2018 time frame.

May 8, 2016 | 11:10 AM - Posted by ppi (not verified)

Exactly my thoughts. As long as it allows me to play smoothly at (near-)max settings, no reason to upgrade.

I expect to upgrade GPU when I upgrade to 4K panel.

May 8, 2016 | 08:03 AM - Posted by Aberkae (not verified)

It seems, based on cores and frequency, that the 1080 is about 25% better than a 980 Ti and 15% better than a non-reference 980 Ti like my AMP Extreme. But I'm pretty sure the non-reference 1080s will be beastly.

May 7, 2016 | 01:29 AM - Posted by JxcelDolghmQ (not verified)

Disappointing to see only DVI-D on this card. Some people still use VGA-only monitors as secondary displays.

It would be nice to have some kind of modular/customizable/DIY display outputs, since people's needs can vary so widely and cards usually converge on one jack-of-all-trades setup.

May 7, 2016 | 02:23 AM - Posted by johnc (not verified)

Those some people need to dump their VGA-only monitors. You can't even get a sharp picture above a certain resolution on VGA, and digital monitors are cheap.

The DVI / VGA adapters work well enough though.

May 7, 2016 | 02:51 AM - Posted by JxcelDolghmQ (not verified)

That's not really true. As with all analog electronics, the quality depends on the hardware and implementation. Cheap monitors might not run well at high resolution (for a variety of reasons), but some equipment (high-end CRT projectors) could run nearly 4k (3200x2560) over VGA. I'm typing this right now at 1440p over VGA.

VGA-only monitors may not be worth buying today, and almost anyone buying a new mid or high-end GPU will have something better to use as a primary display, but that's no reason to discard a monitor that still works fine and can be used as a secondary or tertiary display. Cheap monitors exist, but they have their own problems, and nothing is cheaper than just continuing to use what you already have. Monitors become obsolete very slowly, especially because you can use several at a time, so you can just relegate older ones to less demanding roles. Also, modern monitors look terrible when displaying anything but their native resolution, so if you play any older games, you would want to have another monitor that can run at their resolution.

Converters can be expensive, and only support limited resolutions (generally 2048x1536 at 60Hz).

The great thing about PC building is that there are generally countless options so you can set up exactly what you want, and nothing more. This is just one area that needs some improvement in terms of options.

May 7, 2016 | 04:14 AM - Posted by NamelessTed

Is it possible to use a second GPU that has VGA and output to a second monitor using it?

May 7, 2016 | 11:57 AM - Posted by Anonymous (not verified)

On AMD, yes; probably on Nvidia as well, if you have the space, power, and money for a second GPU, and if both cards use the same driver set. That's probably the best solution, if it's possible.

May 8, 2016 | 03:54 PM - Posted by Scott Michaud

I used to be able to do it on my Intel iGPU, but this motherboard (for Devil's Canyon) has digital-only outputs. Also, Skylake retired VGA support, so it won't be an option going forward.

May 8, 2016 | 03:25 AM - Posted by Anonymous (not verified)

They can buy active or passive adapters to fill that gap. I find it a little odd to pair such a card with an older spec, but if you've got legacy hardware lying around...

May 7, 2016 | 01:44 AM - Posted by Anonymous (not verified)

Here I go imagining how I'll play games to justify getting one again.

May 7, 2016 | 04:47 AM - Posted by Daniel Nielsen (not verified)

I like PC gaming, but I can honestly say that I loathe most of you ignorant know-it-alls who dominate comment boards all over the net.

Reading these comments is about as useful as jumping into a dumpster that's on fire.

May 7, 2016 | 07:22 AM - Posted by darknas-36

I'll get the GTX 1070, game on my 1080p 21.5" monitor, and post a video of it, just to piss people off by showing how not to play games on a badass video card :-p

May 7, 2016 | 07:31 AM - Posted by Anonymous (not verified)

Holy shit, this is big!

May 7, 2016 | 08:57 AM - Posted by donut (not verified)


Glad to finally see 8GB of VRAM on a mid-range card from Nvidia.
AMD Polaris, for its own sake, had better go to 11 ;-)

May 7, 2016 | 12:02 PM - Posted by Anonymous (not verified)

Oh it goes to 11.

May 7, 2016 | 06:57 PM - Posted by arbiter

AMD didn't have 8GB on mid-range cards either; the 390 cards are not mid-range, even though people want to claim they are.

May 7, 2016 | 08:29 AM - Posted by Anonymous (not verified)

GTX600 = GTX1000 = scam

May 7, 2016 | 01:06 PM - Posted by Anonymous (not verified)

I don't understand what this comment is trying to say.

May 7, 2016 | 08:22 PM - Posted by jckaboom (not verified)

I don't know about the GTX 600 scam, but this in theory should cut prices by a lot on the 900 series.
I just checked the 4K benchmarks from Digital Foundry for the 980 Ti vs. Titan X vs. 980 SLI, and the 1080 looks to me like a more power-efficient 980 Ti. No more.
In that benchmark, the overclocked 980 Ti is equal to or faster than the Titan X, and 980 SLI won in most games, so if we put 980 SLI and an overclocked 980 Ti on that slide, it would show little performance gain.

May 7, 2016 | 08:56 AM - Posted by donut (not verified)

So does the NDA lift before May 27th?

May 9, 2016 | 08:20 AM - Posted by svnowviwvn

NDA lifts on May 17th


May 7, 2016 | 10:03 AM - Posted by TwelfthAlias (not verified)

Something doesn't add up here for the 1080. Micron still states on its website that GDDR5X isn't in mass production yet, so how can NVIDIA have enough chips? Sounds like very limited initial availability for the 1080.
I guess they could have some kind of exclusivity deal? Who knows... I guess we'll find out soon enough.

Both the 1070 and 1080 look like great cards though. Kudos to Nvidia.

May 7, 2016 | 10:50 AM - Posted by Anonymous (not verified)

A paper launch isn't new for this kind of luxury product...

May 11, 2016 | 02:19 AM - Posted by Benson (not verified)

It seems this may be a major reason we have the expensive 'Founders' edition and no AIB cards until the middle of June at the earliest.

The Founders Edition 1080 at $699 means Nvidia has a monopoly for a month and will most likely make a profit selling reference cards for the first time since the GTX 680; amazingly, Nvidia often loses money on reference cards.

This monopoly has given Nvidia extra time to secure a supply of GDDR5X to meet AIBs' needs, while the price itself slows sales, allowing Nvidia to ensure they have adequate GDDR5X supply for the 1080 Founders. This is of course not the only factor affecting price: the Founders MSRP is also clearly set high at $699 to politically appease AIB partners, allowing Nvidia a monopoly while minimizing any relationship damage or financial harm to AIBs. Imagine how AIB partners would react if they lost millions in revenue directly because Nvidia decided to maximize month-one sales and used the monopoly to offer the GTX 1080 at normal, lower reference-card prices.

Regardless, I have a funny feeling the Founders price of both the 1080 and 1070 is going to cause problems and possibly backfire. Rightly or wrongly, Nvidia understandably feels entitled to create the Founders cards due to the $2 billion development cost of the new architecture, new 16nm process, new cards, and new software. Yet as the Founders cards are still nothing more than reference cards, how does Nvidia expect AIB partners to fill the price gap between the Founders MSRP of $699 and the lower MSRP of $599? The only way to fill that price point is for AIBs to use cheaper components, substandard to the reference card, yet we all know most custom cards improve on reference specs: higher clocks, more power phases, better cooling, etc. Does Nvidia really expect AIBs to create cheap cards using cheaper-than-reference parts?

So either AIBs such as ASUS, EVGA, MSI, and Galax invest R&D and testing into creating cheaper cards substandard to reference, or consumers don't see cards below $699. Either the AIBs' image will be damaged by cheap product, or consumers' wallets will be damaged by Nvidia's decision to create the Founders Edition.

I think the most likely outcome is that sub-$699 cards will not be seen for a long while; they will not appear until manufacturing processes are further refined, with increased chip yields and board refinements such as miniaturization allowing greater efficiency and lower material costs without compromising quality.

May 7, 2016 | 12:03 PM - Posted by sixstringrick

1733 MHz boost clock. Who else thinks we will be seeing 2 GHz OC cards in the next generation?

May 7, 2016 | 03:40 PM - Posted by Anonymous (not verified)

You should have looked into this a bit more. They demonstrated a 1080 ostensibly running at 2.1 GHz. That's with a 180 W TDP from a single 8-pin power connector, for a maximum of 225 watts: 150 from the 8-pin and 75 from the motherboard slot.

Not a ton of overhead for that kind of overclock. Some of the custom OC cards will have significantly more headroom for overclocking. I think we could see LN2 overclocks get near 3 GHz, which would be nuts.
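A quick sketch of that headroom math, assuming the standard PCIe limits quoted above (75 W from the slot, 150 W from one 8-pin connector) and the 180 W listed board power:

```python
# Back-of-the-envelope power headroom for a single-8-pin card,
# using the in-spec PCIe limits cited in the comment above.
SLOT_W = 75        # PCIe x16 slot can supply up to 75 W
EIGHT_PIN_W = 150  # one 8-pin PEG connector is rated for 150 W
TDP_W = 180        # GTX 1080 listed board power

max_in_spec = SLOT_W + EIGHT_PIN_W          # 225 W ceiling
headroom = max_in_spec - TDP_W              # 45 W left over
headroom_pct = 100 * headroom / TDP_W       # ~25% over TDP

print(f"In-spec ceiling: {max_in_spec} W")
print(f"Headroom over TDP: {headroom} W ({headroom_pct:.0f}%)")
```

That 45 W (about 25% over TDP) is what "not a ton of overhead" refers to; custom cards with extra connectors raise the ceiling.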

May 7, 2016 | 03:56 PM - Posted by Anonymous (not verified)

You can't infer power consumption from the connector layout; cards can violate the standard, and frequently do when overclocking.

May 7, 2016 | 07:01 PM - Posted by arbiter

All you have to do is point to the 295X2: that card would draw as much as 250 watts from each 8-pin it had. The card could draw up to 600 watts with only 2x 8-pin.

May 7, 2016 | 12:07 PM - Posted by Alamo

I hope Ryan focuses on the 1080 vs. the Fury X/390X in his benchmarks, instead of the 980 Ti/Titan X, because the state of Maxwell drivers lately makes me think they might have been gimped specifically for this purpose.
Anyone else feel the same? Or am I reading too much into it?

May 7, 2016 | 04:05 PM - Posted by Randy (not verified)

I can't wait to see some benchmarks from all the YouTube tech channels. I am most impressed by the fact that these cards have one 8-pin power connector given the performance they put out.

May 7, 2016 | 04:31 PM - Posted by Shambles (not verified)

Other than the price and the exact date of release there wasn't much useful information given out, just another night of non-stop fluff. We already knew the names and new we'd see them in June. Wake me up when we get to see the real performance.

May 7, 2016 | 04:31 PM - Posted by Shambles (not verified)

new=knew :L

May 7, 2016 | 06:23 PM - Posted by siriq

The new series looks OK to me. I still wanna see more proper drivers, not these will-it-work-or-not versions.

May 7, 2016 | 06:37 PM - Posted by Rich (not verified)

I'm excited to see this proclamation of graphics power using only a single 8-pin power connector. If it holds true at launch, it would be amazing. Just going to the 16nm node saves power, and FinFET does as well, so I was expecting some vast improvements, but this power/performance, if it holds true, could very well be a game changer.

I'm still going to wait for Polaris, and for reviews by people I trust (your trusted reviewers may be different), before making my next GPU purchase. I hope AMD does well, but they don't have an anchor on my soul; maybe a drag-line, though. (I'll probably not wait for the 'big' versions of the chips at the end of the year/2017; better price/perf with easy driving of 4K would probably get my money.)

May 7, 2016 | 07:07 PM - Posted by Lance Ripplinger (not verified)

The typical price gouging from Nvidia. Yay! I hope the GTX 9xx series drops in price. If there is a fire sale to move the remaining inventory, I might just scoop up a 970.

May 7, 2016 | 11:25 PM - Posted by Keith Pemberton (not verified)

Any idea if there will be a pre-order available for these cards and when that might start?

May 8, 2016 | 09:16 AM - Posted by Anonymous (not verified)

Yes. I'm taking pre-orders now, no joke.

May 8, 2016 | 11:18 AM - Posted by wolsty7

As far as the review goes, I'd specifically like to know more about:

- exactly how GPU Boost 3.0 compares to 2.0

- how the extra bandwidth of the SLI bridge affects performance, especially for non-AFR modes (Explicit Multi-Adapter)

- to what degree overclocking is affected by water cooling (stretch goal, haha)

May 8, 2016 | 04:47 PM - Posted by Cantelopia (not verified)

I'm holding out for the GTX 1440 since I'm planning on buying a QHD gaming monitor soon.

May 8, 2016 | 08:51 PM - Posted by Anonymous (not verified)

You're lucky.

You don't have to wait as long as people wanting to play at 4320.

May 9, 2016 | 06:53 AM - Posted by Anonymous (not verified)

What is "three DisplayPort 1.2 ports (1.4 'ready' with 4K at 120Hz)"?
Is this DP 1.2 or 1.4?
What does 1.4 "ready" with 4K at 120Hz mean?

May 9, 2016 | 09:49 AM - Posted by Anonymous (not verified)

Also, when are DisplayPort 1.4 4K@120Hz HDR monitors coming out, so we can use the full potential of this card since it's DisplayPort 1.4 "ready"?
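For context on why 4K at 120 Hz is tied to DP 1.3/1.4 readiness, here is a rough bandwidth check. It ignores blanking overhead (real timings need somewhat more) and uses the spec's effective four-lane data rates after 8b/10b encoding:

```python
# Rough link-bandwidth check: does 4K@120Hz fit in DP 1.2 vs. DP 1.3/1.4?
def video_bandwidth_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Raw pixel-data rate in Gbit/s, ignoring blanking intervals."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

# Effective 4-lane data rates after 8b/10b encoding:
DP12_HBR2_GBPS = 17.28   # DisplayPort 1.2 (HBR2)
DP14_HBR3_GBPS = 25.92   # DisplayPort 1.3/1.4 (HBR3)

need = video_bandwidth_gbps(3840, 2160, 120)
print(f"4K@120Hz needs ~{need:.1f} Gbps of pixel data")
print("Fits DP 1.2:", need <= DP12_HBR2_GBPS)      # False
print("Fits DP 1.3/1.4:", need <= DP14_HBR3_GBPS)  # True
```

So 4K@120Hz (roughly 24 Gbps of pixel data) overflows a DP 1.2 link but fits HBR3, which is presumably what "1.4 ready" is pointing at; HDR's 10-bit color tightens that margin further.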

May 9, 2016 | 06:54 AM - Posted by Anonymous (not verified)

What about HDR?
1000 nits or 2000 nits? Or?

May 9, 2016 | 08:29 AM - Posted by thedude

I'm building a new gaming PC right now and have collected all parts except for the GPU. I've hit the timing right since I didn't have to wait too long for the new cards to come out. I'm partial to NVIDIA 1080, but I'm excited to see what AMD has up their sleeves too. Can't wait!!!

May 9, 2016 | 11:25 AM - Posted by Anonymous (not verified)

Nvidia now has better thread-scheduling granularity on its GP100 GPU microarchitecture, so things can be done at instruction boundaries instead of after the thread-block level. How Pascal's asynchronous compute improvements will compare to AMD's GCN/ACE units' asynchronous compute, and to AMD's generational improvements there, will have to wait until benchmarking software and enough DX12/Vulkan titles are on the market. Those DX12/Vulkan-enabled gaming engines/games will have to be on the market for at least a year before their engines are fully up to speed on these new graphics APIs.

Those P100 microarchitecture improvements(1) need to be brought down into Nvidia's consumer (GP104) SKUs if they are to be of use for VR gaming. AMD's Polaris is coming to the consumer market first, with Vega being the big core in 2017, so AMD's hardware-based asynchronous compute, already in use for a few GCN generations, will be getting more tweaks with Pascal.

For the HPC/exascale markets, I see AMD getting even more CPU-like functionality into their ACE units, which will force Nvidia to add more CPU-like functionality to their HPC/exascale SKUs. Intel, or any other CPU maker, will have to keep an eye on both AMD and Nvidia if they both keep adding more CPU-like functionality to their HPC/server/exascale GPU accelerator SKUs, and those technologies will find their way down into the consumer variants, because VR game makers will want as much CPU-type functionality as possible on Nvidia's and AMD's consumer SKUs to cut down on the CPU-to-GPU latency problem for VR gaming!

Uncle Sam is throwing billions of government funding into its exascale computing initiative, so that funding to Nvidia, AMD, Intel, and others will result in some very well-funded R&D that works its way from the exascale GPU accelerator market down into the consumer GPU, and CPU, markets!


"Compute Preemption is another important new hardware and software feature added to GP100 that allows compute tasks to be preempted at instruction-level granularity, rather than thread block granularity as in prior Maxwell and Kepler GPU architectures. Compute Preemption prevents long-running applications from either monopolizing the system (preventing other applications from running) or timing out. Programmers no longer need to modify their long-running applications to play nicely with other GPU applications. With Compute Preemption in GP100, applications can run as long as needed to process large datasets or wait for various conditions to occur, while scheduled alongside other tasks. For example, both interactive graphics tasks and interactive debuggers can run in concert with long-running compute tasks."(1)


May 9, 2016 | 11:27 AM - Posted by Anonymous (not verified)

edit more tweaks with Pascal.
to: more tweaks with Polaris.
