PCPer Live! GeForce GTX 1080 Live Stream with Tom Petersen (Now with free cards!)

Subject: General Tech, Graphics Cards | May 16, 2016 - 03:19 PM |
Tagged: video, tom petersen, pascal, nvidia, live, GTX 1080, gtx, GP104, geforce

Our review of the GeForce GTX 1080 is LIVE NOW, so be sure to check it out before today's live stream!!

Get yourself ready, it’s time for another GeForce GTX live stream hosted by PC Perspective’s Ryan Shrout and NVIDIA’s Tom Petersen. The general details about consumer Pascal and the GeForce GTX 1080 graphics card are already official, and based on the traffic to our stories and the response on Twitter and YouTube, there is more than a little pent-up excitement.


On hand to talk about the new graphics card and answer questions about technologies in the GeForce family, including Pascal, SLI, VR, Simultaneous Multi-Projection and more, will be Tom Petersen, well known in our community. We have done quite a few awesome live streams with Tom in the past; check them out if you haven't already.


NVIDIA GeForce GTX 1080 Live Stream

10am PT / 1pm ET - May 17th

PC Perspective Live! Page

Need a reminder? Join our live mailing list!

The event will take place Tuesday, May 17th at 1pm ET / 10am PT at http://www.pcper.com/live. There you’ll be able to catch the live video stream as well as use our chat room to interact with the audience, asking questions for Tom and me to answer live.

Tom has a history of being both informative and entertaining and these live streaming events are always full of fun and technical information that you can get literally nowhere else. Previous streams have produced news as well – including statements on support for Adaptive Sync, release dates for displays and first-ever demos of triple display G-Sync functionality. You never know what’s going to happen or what will be said!

UPDATE! UPDATE! UPDATE! This just in, fellow gamers: Tom is going to be providing two GeForce GTX 1080 graphics cards to give away during the live stream! We won't be able to ship them until availability hits at the end of May, but two lucky viewers of the live stream will be able to get their paws on the fastest graphics card we have ever tested!! Make sure you are scheduled to be here on May 17th at 10am PT / 1pm ET!!


Don't you want to win me??!?

If you have questions, please leave them in the comments below and we'll look through them just before the start of the live stream. Of course you'll be able to tweet us questions @pcper, and we'll be keeping an eye on the IRC chat as well for more inquiries. What do you want to know and hear from Tom or me?

So join us! Set your calendar for this coming Tuesday at 1pm ET / 10am PT and be here at PC Perspective to catch it. If you are a forgetful type of person, sign up for the PC Perspective Live mailing list that we use exclusively to notify users of upcoming live streaming events including these types of specials and our regular live podcast. I promise, no spam will be had!


May 13, 2016 | 04:41 PM - Posted by RadioActiveLobster

I can tell you what probably won't be mentioned or brought up and it's the one thing we all want to know about.

I need answers to properly write my Jen-Hsun/Tom fan fiction.

On a serious note, hope you guys find a way to properly demo the VR tech and I really want to see more examples of the fixed surround issues. In games, not just demo environments.

May 17, 2016 | 01:03 PM - Posted by bhappy (not verified)

Jen-Hsun Huang mentioned open sourcing nvidia tech during the gtx 1080 launch event. Tom, can you please clarify what exactly nvidia's plans are in regards to open sourcing nvidia tech?

May 13, 2016 | 04:54 PM - Posted by RushLimbaughisyourdaddy (not verified)

1. Nvidia says their new lineup supports asynchronous compute, but how? Is this done through hardware, like AMD's ACEs, or via software, like Maxwell?
2. Will Nvidia be supporting the VESA standard of adaptive sync?
3. Any word on a release date for the 1080 Ti and/or a Pascal Titan? Or even a general ballpark, like Q1 or Q2, or something like that.
4. What type of memory will the 1080 Ti use and how much will it have?
5. When the price drops happen later this year for the 1080, will the FE stay at $699?
6. Will the FE always stay about $100 above the base price of AIB cards?
7. Are the team of people who designed the FE cooler Batman fans?
8. eGPUs are a really good thing; they give laptops longevity for gaming. Why isn't NVidia more actively pursuing their adoption? Modularity means various parts can be upgraded when needed.
9. Will nvidia do another dual GPU card?
10. The nvidia control panel is slow and clunky; any plans for an overhaul anytime soon?

May 13, 2016 | 06:15 PM - Posted by Jann5s


May 13, 2016 | 08:57 PM - Posted by Paul EFT (not verified)

I'm just answering some for the sake of discussion, nothing more.

2: No, and no. Nvidia will stick to their own G-Sync. That one is a no-brainer, really. They've invested the money in their R&D. The only way I see them supporting it is if G-Sync flopped hard, which it hasn't.

3: Good luck getting that information when the 1080/1070 JUST launched. However, I'd be looking at AMDs 'Vega' release time frame to get a rough idea of when they would announce the Titan/Ti.

9: Not sure if there is a market for it, to be honest. Unless maybe it was targeted for VR.

May 13, 2016 | 10:29 PM - Posted by Anonymous (not verified)

2. Of course it won't happen now, but that doesn't mean it won't ever happen. Especially since Adaptive Sync will be in Televisions at some point, do you honestly think they won't do it then?

3. It can't hurt to ask, who knows, he might actually confirm a ball park figure.

9. There never was a good market for dual gpu cards, it's all about bragging rights and e-peen duels. It didn't make any sense at all to do a titan z, and yet they did it anyway. It's still pretty cool tech to read about.

May 13, 2016 | 10:53 PM - Posted by Cantelopia (not verified)

They should support G-Sync and FreeSync. If G-Sync is really $200 better then people will continue to buy G-Sync monitors, but those with FreeSync monitors won't be locked out from buying nV cards.

May 14, 2016 | 07:50 AM - Posted by Paul EFT (not verified)

But that's just the thing. They want people to buy THEIR G-Sync, which they actually make money on. Those little modules they provide are not free; Asus/Acer/whoever buys them from nVidia, and sales are strong.

So while it would be really awesome for everyone if they adopted the standard; they will not. But maybe, just maybe, we will see a monitor that has both G-Sync and Freesync, however unlikely that may be. nVidia probably makes the manufacturers sign some sort of contract to prevent that from happening.

May 14, 2016 | 08:04 AM - Posted by Paul EFT (not verified)

2: I'm not sure why TVs would have Freesync; there's no need for it there. But let's assume that it becomes the latest craze to buy TVs and use them for PC gaming. Then guess what? nVidia will have a G-Sync TV. It's that simple. (This is just my opinion. But nVidia goes where the money is.)

3: Yeah, absolutely. I was just saying that they just want to talk about the 1080 and 1070 and hype the shit out of them. Then, when we're all hyped out, and starting to get a bit bored - they hit us again with Ti and Titan. Kind of a classic move.

9: Really not sure what they were hoping to do with Titan Z. I was basically just basing my answer on that card alone. It was really bad. Doubt they sold a lot. But it did get people talking, that's for sure. However, when someone said they bought a Titan Z, the people's response was usually "why the hell did you buy that piece of shit?".

May 14, 2016 | 01:26 PM - Posted by Imapimp (not verified)

2. Because future game consoles would have capability to run adaptive sync monitors/TV's

May 14, 2016 | 01:50 PM - Posted by Paul EFT (not verified)

Yup, forgot about that.

May 14, 2016 | 04:52 PM - Posted by Garrett Sandberg (not verified)

why would they need to if they lock their frame rates anyway? There is literally no reason for them to do that.

May 14, 2016 | 05:11 PM - Posted by arbiter

Um, don't think that will get it into TVs, since consoles will likely dumb the game down to keep a certain FPS anyway, except in rare sections. No need for it, ever.

May 14, 2016 | 10:29 PM - Posted by Imapimp (not verified)

Let's look at this analytically, rather than just shooting off at the mouth because AMD's hardware is in a console. Let's face it, you are very well known to spread mud and misinformation as often as possible about AMD.

CURRENT gen consoles, aka ps4 and xbone can barely run games at 30fps at 1080. So the statement that games on consoles are dumbed down to achieve 30fps is true, on CURRENT and older consoles.

The key here is VR on consoles. VR requires a beefier GPU in order to run it. We're not talking only watching videos and simple games where you stay in place. Games that could be on any of the 3 main platforms for VR - Oculus, Vive, and Sony's version.

What this means is that with a beefier GPU, there isn't any reason to "dumb down" a game on a console. A better GPU combined with an adaptive sync TV means console gamers now get the benefit of VRR.

Having adaptive sync in a TV and game console is a win-win for everyone: AMD gets their hardware put in more consoles, more consoles get sold, and more TVs get sold. And to top it off? Console gamers would get to experience the buttery smoothness of VRR.
So yeah, there is a need for adaptive sync TVs and consoles.

May 15, 2016 | 12:01 AM - Posted by Anonymous (not verified)

arbiter is a spin-doctor for the hand that feeds, so what's your job this week, defending the client who pays by the post! Green from whoever will pay is arbiter's way!

VESA DP adaptive-sync should be in every display product! VESA is the display industry's standards Body!

May 16, 2016 | 06:24 AM - Posted by Anonymous (not verified)

1: good question, i'd like to hear more on that too before i buy a 1070 or 1080

2: Fastsync is the new kid in town.

3: so far we've heard HBM2 should be ready somewhere in the first half of 2017. Rumours have it that AMD might launch Vega before that time, which would be exciting, but experience says nvidia will beat AMD in such races.
Hopefully AMD's Polaris reveal event at the end of this month will tell us more.

4: HBM2, 16 / 32GB

5: most likely yes, otherwise nvidia will get some heat for not doing so given the explanation for the 699 pricepoint.

6: related to 5 too much

7: obviously

8: $$$

9: no, because nvidia doesnt want you to SLI those, da bum tss

10: most likely, that's why they've had the B-team on drivers lately. they're catching up to AMD-level driver support now :P

May 13, 2016 | 11:42 PM - Posted by Anonymous (not verified)

Some good white paper reading! And maybe behind some pay-walled publications they will actually talk about and compare and contrast the actual hardware features on a point to point basis for asynchronous-compute features between the Pascal and Polaris. I'd expect that after the Polaris based SKUs are actually released that AMD will have more in depth white papers to be read! Remember to always take the time to filter out any marketing monkeys influence in any white papers by learning the generic computing sciences terminology for the technical terminology! for example:

SMT(simultaneous multithreading) for what Intel's marketing department has called/branded HyperThreading! Intel did not invent “HyperThreading”(SMT)!

UMA(Unified memory addressing) for what AMD's marketing department has called/branded hUMA(heterogeneous memory addressing)! UMA is being used by others, PowerVR/etc.

AMD has yet to list any whitepapers for their Polaris("GCN4") micro-architecture, but there are probably plenty of white papers on earlier GCN releases! There will be more info once Polaris is officially released!

Marketing the SCOURGE of the universe!


May 13, 2016 | 11:47 PM - Posted by Anonymous (not verified)

Edit: hUMA(heterogeneous memory addressing)
To: hUMA(heterogeneous Unified Memory Addressing)

Damn You Marketing and your sycophant obfuscation of the generic computing sciences terminologies!

May 14, 2016 | 12:00 AM - Posted by Anonymous (not verified)

"7. Are the team of people who designed the FE cooler Batman fans?"

Or stealth fighter fans, maybe they are worried about the card's RADAR cross section.

I like AMD's plain-Jane designs for their Fury/Nano cards, but lose any unnecessary lights/lighted name plates, unless it's actually an indicator light!

GPU shrouds should take up the least amount of space while still not adversely affecting cooling!

May 14, 2016 | 12:27 AM - Posted by Anonymous (not verified)

P.S. Much better reporting will be at hand once AMD re-enters the HPC/Workstation market with its HPC/workstation and Exascale APUs on an interposer, as well as stand alone GPU accelerator markets! I'd expect that AMD's workstation/HPC/Exascale Zen/Vega and Zen/Navi(with a newer than HBM memory technology, and a modular GPU technology) APUs on an interposer will supplant most of AMD's GPU only SKUs for that accelerator market.

The websites for the HPC/Server/Exascale markets, as well as the professional trade journals(Pay walled unless you have a nearby college or university) are the best bet for more complete hardware information.

May 14, 2016 | 03:00 AM - Posted by Baldrick's Trousers (not verified)

You can look at the history of 700 and 900 series timings to find out when things like Titan and Ti variants are due.

Typically you're looking at a minimum gap of 6 months for the Titan, so that suggests we are looking at the end of 2016 or start of 2017.

It's a few months later than that, when we ought to see a cut down version of Titan bearing the Ti moniker. So, that would be spring 2017.

But no one outside of Nvidia knows for sure, and Nvidia ain't saying. Nvidia's only interest right now is selling as many 1070s and 1080s as possible.

May 14, 2016 | 04:59 AM - Posted by renz (not verified)

1) will developer bother to use it unless sponsored by certain gpu maker?
2) most likely no, despite nvidia already having the hardware to support both DP 1.3 and DP 1.4
3) will not comment on future product
4) will not comment on future product
5) the FE will cost $699 for the whole of its life.
6) not necessarily. AIB can actually charge more. Especially with their ultra super extreme overclocked edition version.
7) lol maybe
8) already did it with their recent drivers. Just no fancy names like AMD X-Connect and such.
9) does nvidia even need it? In one of Tom Petersen's interviews with our very own Ryan here, Tom already mentioned the disadvantages of dual GPU in terms of cost.
10) IMO the nvidia NVCP feels clean and easy to navigate. Loads pretty fast too. I'd prefer to keep it that way rather than have it modernized with useless animations and such, a la GFE.

May 14, 2016 | 01:33 PM - Posted by Imapimp (not verified)

10. It's clunky and slow and it takes a long time to load. Recall when the Radeon software debuted and how surprised Ryan and his crew were by how fast it loaded, how much nicer their new GUI is, and how easily everything is laid out? Not saying the nvidia control panel is laid out badly, but it looks like something from the regular Windows menu.

But the real issue here is how long it takes to load, how slow it is to navigate, and how slow it is to save or edit any setting. I've got both an AMD and an Nvidia card, and comparing the control panel from each side, AMD hands down has the better setup.

May 15, 2016 | 01:36 AM - Posted by renz (not verified)

Are you sure you're not mistaking NVCP for GFE? When opening the control panel from the desktop, NVCP just pops up instantly for me. Applying and changing settings is also fast. Personally I prefer the simplicity of NVCP to the flashy look of GFE and the new Radeon Crimson layout.

May 15, 2016 | 11:56 AM - Posted by joshtekk4life (not verified)

right-click the desktop, choose NVCP, nice and slow and clunky

May 16, 2016 | 01:38 AM - Posted by renz (not verified)

nope, it was fast and straightforward. even if nvidia intends to overhaul nvcp i hope they keep it simple. looks like a regular windows tab? i'd take that any day over the flashy look of GFE or Crimson.

May 14, 2016 | 01:35 PM - Posted by Imapimp (not verified)

Guys, thanks for all of the replies, but remember that even though we think we know the answer, we still want to hear it come from Tom's mouth.

May 15, 2016 | 05:44 PM - Posted by AnonymousE (not verified)

You forgot question 11....
Is that 8GB or is it really 7.5GB......LOL couldn't resist

May 16, 2016 | 08:01 PM - Posted by DaKrawnik

1. This again? *sigh* People think Async Compute is so great, but it's only great on paper. Hitman dev admitted it's hard to code for and we were lucky to get the 5-10% boost it offered. PS, The game sucks! Next...

2. Why? It's inferior to the alternative. Sure it's cheaper, but so are low end GPUs. Also, if it's open source, and it is, what is the incentive for it to even compare to a technology that's had a ton of money dumped into it? Perhaps you've heard of an OS called Linux and a game system called Steam Box? Get the point? Good. Let's move on...

8. Meh...

10. It is slow, but how often are you going into it for it to matter that much to you?

May 13, 2016 | 05:07 PM - Posted by Randal_46

When can we get 1070 benchmarks?

May 13, 2016 | 05:14 PM - Posted by Anonymous (not verified)

PCPER = nvidia shill

May 13, 2016 | 05:28 PM - Posted by devsfan1830

yes, because they NEVER cover any AMD products...

May 13, 2016 | 05:45 PM - Posted by Garrett Sandberg (not verified)

they will review amd products once amd actually releases a product

May 13, 2016 | 09:53 PM - Posted by devsfan1830


May 13, 2016 | 08:03 PM - Posted by arbiter

Yea and Ryan doesn't own AMD stock.

May 14, 2016 | 12:46 AM - Posted by Anonymous (not verified)

This happens with every release of new products, with Nvidia having those professional HPC/workstation markets and other market revenues with which to get its products to market first! Zen/Vega and Zen/Navi will get AMD some of those professional HPC/workstation market revenues, so maybe next year, or the year after, AMD can get its consumer products to market sooner!

AMD had better start getting those Polaris("GCN4.0", or whatever AMD officially calls it) whitepapers published. There's always the Hot Chips symposium, held every year in August on the Stanford University campus! Hot Chips is always the best for CPU/APU/GPU and other processor presentations and whitepapers!

FanBoy GITs on all sides! Reducing technology discourse to the LCD(lowest common denominator) Mono-Brow level!

May 16, 2016 | 08:05 PM - Posted by DaKrawnik

It's because of them and TR that you got better frametimes... if you can call it that (see pcper Pro Duo review). You should be thanking them.

You should also be thanking nVIDIA for the GTX 970, because without it, you'd have been paying $549 (that was the MSRP) for a 290X.

May 13, 2016 | 05:26 PM - Posted by devsfan1830

Is the Surround display warping correction provided by Simultaneous Multi-Projection performed automatically at the driver level or is it something game developers need to program into their games?

In other words: will games that support surround need to be updated to take advantage of this, or does it automatically happen at the driver level?

Will users need/be able to calibrate their surround displays and the warp correction to account for varying viewing angles relative to adjacent monitors? It is not likely that everyone's monitors are positioned the same.
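For background on what such a calibration would control: surround rendering normally uses one wide planar projection, while angled side monitors really want their own viewpoints rotated to match the physical toe-in angle. A minimal geometric sketch of that idea in plain Python (the function names and the 30-degree angle are illustrative assumptions, not NVIDIA's implementation):

```python
import math

def rotate_yaw(v, degrees):
    """Rotate a 3D direction vector around the vertical (y) axis."""
    t = math.radians(degrees)
    x, y, z = v
    return (x * math.cos(t) + z * math.sin(t),
            y,
            -x * math.sin(t) + z * math.cos(t))

def surround_view_directions(toe_in_deg):
    """Forward vector for each of three surround monitors, with the
    side panels angled in toward the viewer by toe_in_deg degrees."""
    forward = (0.0, 0.0, -1.0)  # center monitor looks straight ahead
    return {
        "left": rotate_yaw(forward, -toe_in_deg),
        "center": forward,
        "right": rotate_yaw(forward, toe_in_deg),
    }

views = surround_view_directions(30.0)  # side panels toed in 30 degrees
```

Each per-monitor forward vector would then seed an ordinary perspective frustum, which is why a per-setup angle setting matters: the correct projections depend on how the user's panels are actually angled.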

May 13, 2016 | 05:31 PM - Posted by Anonymous (not verified)

when are we getting a card priced lower than the GTX 950 with VP9/HEVC 10 bit video decode acceleration? or are they going to continue offering GM107 forever?

will it make sense anytime soon to use GDDR5X instead of GDDR5 for lower cost cards, like running GDDR5X at a 64bit bus instead of GDDR5 at 128bit?

1070 and 1080 have the same number of "ROPs" or not?

also, considering the popularity of the 970 on steam survey, have we seen a significant shift in the market? I mean, it used to be like the $200 maybe 300 and lower cards were king in terms of sales, now we have a lot of focus at around $300-400.
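On the GDDR5X-on-a-narrow-bus question above, the raw numbers are easy to check: peak bandwidth is per-pin data rate times bus width. A quick sketch (the 10 Gbps and 8 Gbps per-pin rates are illustrative 2016-era figures, not a statement about any specific card):

```python
def peak_bandwidth_gbs(data_rate_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin rate x pins / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# GDDR5X at 10 Gbps on a 64-bit bus vs GDDR5 at 8 Gbps on a 128-bit bus
print(peak_bandwidth_gbs(10, 64))   # prints 80.0 (GB/s)
print(peak_bandwidth_gbs(8, 128))   # prints 128.0 (GB/s)
```

At those rates, GDDR5X on a 64-bit bus would still trail GDDR5 on 128-bit, so the appeal for cheap cards would be board simplicity rather than bandwidth.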

May 17, 2016 | 05:35 AM - Posted by Anonymous (not verified)

About VP9/HEVC 10bit, I'm assuming that it will eventually trickle down to that point. The GTX 1080 specs already indicate HW ENC/DEC for both of those formats.

It probably wasn't in GM204 at release due to them not being heavily adopted at development time. GM206 had it because it was made further down the line.

All the GP series should have it this time if the flagship already has it.

May 13, 2016 | 05:48 PM - Posted by Thedarklord

Q: Is NVIDIA restricting AIB partners from releasing overclocked versions of the "Founder Edition" GTX 1080 cards?

Q: Is "Founder Edition" the new term for "Reference Edition"?

May 13, 2016 | 05:50 PM - Posted by Anonymous (not verified)

yes, precisely that.

May 14, 2016 | 10:44 AM - Posted by Anonymous (not verified)

The Founder's Edition is: if you want the 1080 first, pay $100 more! Things like this happen when one company gets too much market share. Nvidia has the money to get its products to market first and milk those markets for more $$$!

I see over at Phoronix that AMD's RTG is putting loads of effort into their AMDGPU open-source drivers, so maybe that will get AMD's Vega SKUs some more HPC/workstation accelerator business; that HPC/workstation market uses Linux based OSes extensively! Do you really think that GPUs are only about gaming use and gamers? The real money comes from more than just gaming uses for GPUs!

Nvidia has its GPU accelerator market to bring in some big bucks, so it can afford to pull some Founder's Edition market milking on its consumer SKU customers. AMD cannot afford to do that currently!

May 16, 2016 | 01:48 AM - Posted by renz (not verified)

true, those HPC shops use linux, but they also want something that is reliable and fully working, not just half working. just look how long it has taken for the open source driver to get on par with the official drivers from amd. and then look at nvidia performance vs amd cards in linux. the open source driver still cannot bring out the full potential of an AMD card vs an nvidia card. this alone will sway those HPC clients towards the nvidia solution. being open source doesn't mean jack if you can't do the job. and these HPC clients are not going to wait until all your stuff is ready. don't you find it strange? those S9150s can easily kick GK110 and Xeon Phi out of the window in terms of performance, and yet most HPC clients are still waiting for KNL and Pascal.

May 17, 2016 | 08:53 AM - Posted by Anonymous (not verified)

History is a recursive function... Guess why communists would have capitalists open their wallets, the same way open source gurus would have private companies open their IP?

May 13, 2016 | 06:12 PM - Posted by Anonymous (not verified)

Oh fuck yes. This is AWESOME. Tom is the shit. Fucking bow to his greatness and suck his awesome sauce AMD Fanboy Faggots.

May 13, 2016 | 06:29 PM - Posted by DevilDawg (not verified)

"And Here's Your Sign!!!"

May 13, 2016 | 06:39 PM - Posted by YTech

Not sure if you noticed, but I heard Tom talk during the 1080 nvidia unveiling. I think Tom had some involvement with the new VR features :)

You know... The paid continued education he did in late winter. ;)

May 14, 2016 | 12:12 AM - Posted by Anonymous (not verified)

Mindless gaming GITs, gaming has a definite Goose Stepper Problem, it's those FPS games, and other shoot-Em-Up sorts of gaming that attracts the Floyd R. Turbo crowd!

WOW the FanBoy to FanBoy trash talk is getting heavy, Get a Room all you FanBoy GITs!

May 15, 2016 | 08:43 AM - Posted by YTech

Q1. How does the GTX 1080 compare to the GTX 980Ti in performance and quality? E.g. current games vs VR, dual display, gsync, async, etc.

Q2. Will there be a GTX 1080Ti or is the 1080 the best of the 10 series?

Q3. How will the 1080 share compute work with the CPU? Will processing be mostly on the GPU or will it still rely on the CPU?

Q4. Will the 1080 be able to make use of additional processing from an APU or a secondary GPU? And better optimized with a CPU?

Q5. Why didn't the 1080 come with HBM? Does the GPU architecture still need work to be fully beneficial with HBM?

Q6. Why hasn't the reference design changed? The blower still seems to be the same as previous versions. Anything new on the PCB to help cool down the card? What about the angle of the blower? What about passive hybrid cooling?

Q7. Why go with the 10 series naming for the new cards? Why not change it with a single character? Enthusiasts won't have issues identifying which card is best for them, but everyone else who doesn't know much will believe this card isn't compatible with their 4K/UHD TVs or monitors.

May 13, 2016 | 06:47 PM - Posted by Anonymous (not verified)

-Release date and approximate cost of Big Pascal.
-Will gameworks work with non Nvidia gpu's in the future or ever
-Will the 1060 be faster than a 390x
-Will they block multi-gpu with AMD cards in Directx 12
-Would they consider quarterly Steam like sales on their cards at deep deep discounts

May 13, 2016 | 08:05 PM - Posted by arbiter

1st and 3rd question he won't and can't answer as he can't talk about future products.

May 14, 2016 | 05:06 AM - Posted by renz (not verified)

I don't think nvidia will block GPU mixing in DX12, because instead of using an API from nvidia or AMD, developers will actually use the one that comes with DX12. But before talking about nvidia blocking the feature, the bigger concern is whether any developers are really interested in using multi GPU for their games.

May 13, 2016 | 06:53 PM - Posted by jnev (not verified)

Too many nVidiots buying the hype.

May 13, 2016 | 07:02 PM - Posted by Zorkwiz

and buying their products, I wonder why... Must be because we like inferior products and wasting money I guess.

May 13, 2016 | 07:13 PM - Posted by Zorkwiz

C'mon guys, you know he won't say anything about Big Pascal, not even worth the time to ask.

My question would be, with VR providing more people with 3d capable displays, is there any chance of 3D Vision support being resurrected, or another set of 3D features implemented, to allow for HMD owners to easily play non-VR games in 3D?

Is the fact that Micron only recently announced mass production for GDDR5X a concern for 1080 availability? How long does he expect the market to be supply constrained?

May 13, 2016 | 07:05 PM - Posted by JoeGuy00 (not verified)

Are they brute forcing Async and avoiding performance penalties through software or do they now benefit from Async like AMD does?

May 13, 2016 | 07:11 PM - Posted by Anonymous (not verified)

1. Will the 1080 Ti use HBM2?
2. How long will Nvidia support Maxwell in gameworks?
3. What are the current Pascal exclusive gameworks features?
4. Please explain this Pascal only supports 2 way SLI thing. Also HB SLI bridge... what is going on there?
5. What is the highest overclock you have seen on a 1080?

May 13, 2016 | 07:14 PM - Posted by Nick the Greek (not verified)

I think it is fair to say that the current implementation of G-sync does have some advantages over current free-sync / adaptive-sync implementations (frame doubling across ALL monitors, variable refresh rate in windowed mode etc).

So my question is: why doesn't NVIDIA then support the adaptive-sync standard in addition, giving consumers the choice to either save some money on a cheaper monitor or opt for the arguably more polished implementation at a price premium? Or even move from an AMD gpu and free-sync monitor setup to an NVIDIA gpu whilst retaining the monitor (or at least its variable refresh rate functionality)?

Thanks :)
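On the frame-doubling advantage mentioned above (often called low framerate compensation): the idea is that when the game's frame rate falls below the panel's minimum refresh, each frame is repeated at an integer multiple that lands back inside the panel's variable range. A toy model, with assumed panel numbers and no claim to match any vendor's actual algorithm:

```python
def refresh_multiplier(fps, panel_min_hz, panel_max_hz):
    """Smallest integer multiple of fps that lands inside the panel's
    variable refresh range; 1 means no compensation is applied."""
    if fps >= panel_min_hz:
        return 1  # already inside the range, present frames as-is
    n = 2
    while fps * n < panel_min_hz:
        n += 1
    # fall back to no compensation if the multiple overshoots the panel
    return n if fps * n <= panel_max_hz else 1

# a 24 fps cutscene on a 40-144 Hz panel: each frame shown twice, at 48 Hz
print(refresh_multiplier(24, 40, 144))  # prints 2
```

So a 24 fps scene on a hypothetical 40-144 Hz panel would be scanned out twice per frame at 48 Hz, staying inside the variable refresh window rather than dropping to fixed refresh.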

May 13, 2016 | 07:55 PM - Posted by Will (not verified)

What was the breakthrough that allowed them to push 2.1GHz at 67C on air when OC'd?

May 13, 2016 | 09:15 PM - Posted by NBMTX (not verified)

the smaller process, probably... and a vapor chamber cooler on top. We also don't necessarily know ambient temps, fan speeds, whether that was a top 50% result or top .5% golden chip, etc, etc...
while it might be nice to not have to worry about the details and validity of such a statement... this official site exists: https://orderof10.com while the entirety of the 1080 and 1070's "performance" details exist in a single chart at http://www.geforce.com/hardware/10series/geforce-gtx-1080 ... because a 1070 page doesn't even exist despite being the number one GPU in use according to Steam.

May 13, 2016 | 09:16 PM - Posted by NBMTX (not verified)

*being the successor to the [number one GPU]

May 14, 2016 | 04:41 AM - Posted by Val

Hello Tom,

Simultaneous Multi-Projection, Single Pass Stereo, no/low performance impact: how?

Is DSR feasible with Simultaneous Multi-Projection?

Tell us more about DisplayPort 1.3/1.4 "Ready".

Is VXGI improved with Pascal?

16nm and HBM/2 also mean shorter cards could be the future. What do you think about a shorter Founders Edition?

What can you tell us about the upcoming Shield?

Thanks Tom and PCPer.

May 13, 2016 | 10:58 PM - Posted by winsrp

Just 2 simple questions.

#1 will there be a version of the 1070 with GDDR5X
#2 will the 1070 use HB SLI bridge also for 2 way sli.

May 14, 2016 | 03:37 AM - Posted by arbiter

#1 probably stays gddr5 to keep the price down. By the time it matters, next gen cards will be out anyway.
#2 it is the same card as the 1080, so it should use the same bridge.

May 14, 2016 | 12:12 AM - Posted by Redemption77 (not verified)

With the new SLI HB bridge incoming, When will Nvidia finally allow systems with SLI to use all available memory from both GPUs? Would this new higher bandwidth be capable of making that happen?

May 14, 2016 | 12:31 AM - Posted by Anonymous (not verified)

Always a pleasure watching PCper's livestreams with Tom as they're very informative and entertaining sources of information on Nvidia and GPU related tech in general.

Unlike the few times AMD reps were featured on PCper livestreams. They seemed so bitter with Nvidia and emphasized bashing their competitor and feeding AMD fanboy trolls more than promoting their own products.

May 14, 2016 | 12:48 AM - Posted by NamelessTed

1a. In terms of simultaneous multi-projection, how much of this is software vs hardware? That is, is it something that took a lot of software R&D, and would it be possible to apply similar techniques to previous generations of cards, or is it heavily dependent on the architecture of the new chips?

1b. Is simultaneous multi-projection done heavily on the driver side or do developers need to support it by including new code into their games similar to Ansel?

1c. If it is hardware dependent, how difficult do you think it would be for AMD to implement something similar with their own cards.

1d. How much control does the user have over the multi-projection? Is there a setting in the driver to set the exact angle of the two side monitors, or does it have a sliding scale, or maybe 5 degree increments?

2. In terms of DX12, does nVidia have any plans to encourage and support game developers to implement SLI/multi-GPU in those games. It seems like many devs (including Epic) simply have no plans to support multi-GPU in any fashion in DX12 titles. Even with SLI scaling being uneven across different games, a lot of them at least support it. I can't imagine nVidia wants to see 50% or more of games not supporting multi-GPU configurations in any way.

May 14, 2016 | 11:02 AM - Posted by Anonymous (not verified)

Simultaneous multi-projection is middleware/software with some driver support, so it's not just Nvidia, or even AMD, that could offer similar functionality. Third-party middleware makers could offer their own products, with Vulkan and DX12 allowing closer-to-the-metal support; and with the overall driver model simplified under DX12 and Vulkan, expect more features to become available for multi-screen gaming.

What you really have to worry about is restrictions under M$'s UWP forcing the entire gaming market to adopt a one-size-fits-all cookie-cutter approach, with M$ limiting user choices in the PC/laptop gaming market to more of a locked-down, console-type experience! So it's best to hope for more Steam OS and Linux/Vulkan gaming support going forward, to stop M$'s UWP endgame master plan from ruining any open and innovative gaming on PCs/laptops running M$'s Windows 10 OS!

AMD's RTG support for open source is what is going to become very important for Linux/Steam OS and Vulkan API gaming, and hopefully this will force Nvidia to be more open also! No one benefits from GPU, or OS, vendor lock-in!

May 15, 2016 | 01:56 AM - Posted by NamelessTed

I honestly don't see MS with UWP as a threat to PC gaming in general. Sure, they are going to have exclusives that they have published on their platform. But they aren't going to be changing the entire landscape with all the other options available to the market right now; including Steam, GoG, Origin, Blizzard and other self-publisher platforms. Do you really think EA is going to let Microsoft cut into their software sales without a huge fight?

On the other side, MS has already shown a willingness to update its platform in a somewhat timely manner. They have already added support for FreeSync and Gsync as well as allowing the user to disable v-sync. They have a lot of growing to do, but at least they seem willing.

May 15, 2016 | 10:45 AM - Posted by Anonymous (not verified)

Yes, BUT that Windows 10 EULA gives M$ the right to take back anything and everything at any time! M$'s end goal is to reduce PC/laptop gaming to a locked-down, closed ecosystem with M$ in full control (XBONE style). That is why there has to be a Steam OS/Linux alternative with a full, feature-for-feature PC/laptop software/gaming ecosystem for Linux/Vulkan gaming. The Linux gaming ecosystem only needs to reach 5% to 10% of the total PC/laptop OS market share to become a very viable alternative to any closed OS ecosystem.

The mobile market is going all in on a Linux-kernel-derived gaming ecosystem, with the Vulkan graphics API as the main driving force behind mobile gaming. Plenty of features are being added to mobile GPU hardware, like dedicated ray-tracing units (a PowerVR option), that would greatly improve lighting/shadow/AO effects for desktop gaming if the major desktop/laptop GPU makers adopted them. So Steam OS/Linux/Vulkan will actually have a much larger development/innovation base with which to drive gaming software and Vulkan graphics hardware/software innovation.

M$ has been an anchor around the neck of rapid gaming software/hardware innovation, with its closed-ecosystem goals and its non-platform-independent OS/graphics APIs.

It's time for gaming to go totally platform independent, with the Vulkan API having the widest development base across the entire range of devices for the most hardware/software-driven innovation possible, and the gaming OS/API/software ecosystem NOT under the control of one singular selfish/greedy interest.

M$ has no willingness without competition; that M$ willingness is only a defensive response to temporarily draw more unsuspecting customers into its master plan. There must be a complete, feature-for-feature open-ecosystem alternative to M$'s (or anyone else's) closed ecosystems for gaming OSs/APIs/game providers, or PC/laptop gaming will be closed up completely, game-console style. Don't let PC/laptop gaming become XBONED under M$'s control.

May 15, 2016 | 10:56 AM - Posted by Anonymous (not verified)

Edit: ecosystem under the control of one singular selfish/greedy interest.

to: ecosystem NOT under the control of one singular selfish/greedy interest.

proof reading, proof reading, damn it! for paragraph 4.

May 15, 2016 | 04:33 PM - Posted by NamelessTed

I can see your point, but at the same time I don't feel concerned about it from the viewpoint of the end user. AMD did their thing with the Mantle API, it seems like we are seeing more games support OpenGL, and pretty much everybody in the dev community has been raving about Vulkan for several months now.

The ecosystem is already adjusting for anything that MS is doing and might do in the future. That is important. I'm just excited to finally see a new chip from nVidia.

May 16, 2016 | 12:52 PM - Posted by Anonymous (not verified)

Just remember that M$ really has no concern with retaining all of its current Windows 7/8.1 customers! M$ sees the profit-per-customer metric that Apple has, and the overall profits Apple makes from its closed OS/application ecosystem (more so on iOS than OS X), and Redmond will stop at nothing to get that higher Apple-style profit-per-customer ratio. M$'s phone market is a disaster, so M$ will force its UWP (iOS model) onto all of its Windows 10 customers, and profit even more by eventually forcing its desktop application customers to re-purchase the same functionality under the UWP closed app/store ecosystem, at that 30%-of-the-action, Apple-style closed-ecosystem business model!

M$ will, however, stop at nothing to obtain the very best spin doctors to provide the most convincing, plausible deniability about M$'s true UWP end-plan intentions of a completely closed OS/API ecosystem and monetization model, in order to ensnare as many locked-in Windows 10 customers as possible! That EULA for Windows 10 gives M$ far too much latitude to do its will!

May 14, 2016 | 01:04 AM - Posted by Xander R (not verified)

1. Will 10 series exclusively support 2-way SLI?
2. What is the future of 3/4-way SLI going forward?
3. What practical benefits does the new HB-SLI bridge bring over existing bridges?
4. How soon can we expect to see g-sync displays that support DP 1.3 or 1.4 (4K @ 120+Hz) now that we finally have it in Pascal?
5. What games coming out this year are going to use simultaneous multi-projection and other VRWorks features?
6. Why call the reference cards Founders Edition and price them $100 above MSRP?

May 14, 2016 | 03:35 AM - Posted by arbiter

1/2. Likely a Ti model will do 3/4-way, but for a mid-range card it's kinda pointless, since 3 of those cards likely cost more than the next one up, and the performance loss is more than if you just bought 2 of the next one up.
Question 3 I think they answered at the launch: it has, I think, 2x the bandwidth of first-gen SLI.

May 14, 2016 | 04:36 AM - Posted by Anonymous (not verified)

Why not DisplayPort 1.4?
Does DisplayPort 1.3 support 12-bit/14-bit color at 4K, like on the AMD cards?
HDR at 1000 nits or 2000 nits?
Does AMD have support for LG's WRGB OLED?

May 14, 2016 | 05:13 AM - Posted by renz (not verified)

Just a simple one: about the new bridge.

How will it impact SLI performance (scaling)? How about supporting more than 2 GPUs? And the thing I want to know the most: why does NVIDIA still need a bridge for SLI? What's the reason they're not doing it in a way similar to AMD's XDMA CrossFire?

May 15, 2016 | 08:53 AM - Posted by YTech

I think nVidia is keeping the PCIe lanes open for greater bandwidth when required. The SLI bridge exists so they don't have to rely on the remaining PCIe lanes.

This is my guess.
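A rough back-of-the-envelope comparison supports that guess. The PCIe figures below follow the PCIe 3.0 spec; the SLI bridge numbers are commonly cited ballpark values, not official NVIDIA specifications:

```python
# Back-of-the-envelope link bandwidths. PCIe figures follow the PCIe 3.0
# spec (8 GT/s per lane, 128b/130b encoding); the SLI bridge numbers are
# commonly cited ballpark values, NOT official NVIDIA specifications.
pcie3_lane = 8 * (128 / 130) / 8      # ~0.985 GB/s per lane, per direction
pcie3_x16 = 16 * pcie3_lane           # ~15.75 GB/s, shared with all other traffic

classic_sli = 1.0                     # GB/s, assumed figure for a classic bridge
hb_sli = 2 * classic_sli              # HB bridge is described as roughly doubling it

print(f"PCIe 3.0 x16 : {pcie3_x16:.2f} GB/s (shared bus)")
print(f"Classic SLI  : ~{classic_sli:.1f} GB/s (dedicated)")
print(f"HB SLI       : ~{hb_sli:.1f} GB/s (dedicated)")
```

Even doubled, the bridge carries a small fraction of PCIe x16 bandwidth, which suggests it exists to keep frame transfers off the shared PCIe link rather than to out-run it.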

May 14, 2016 | 11:09 AM - Posted by trenter (not verified)

Anger your NVIDIA overlords and see this as the final Tom Petersen live stream. PCPer, you will be judged according to the material published since the last livestream; anything but pure NVIDIA bias and you're surely doomed. Luckily for you, I have been informed that your NVIDIA bias is pure, has been sustained, and is fully intact. Proceed with caution!

May 14, 2016 | 11:54 AM - Posted by snook

Ask him about the 2x performance over Titan (yeah, yeah... VR. Still smoke and mirrors), knowing that the benchmarks you've done show 30-40% max. Did they hire AMD's marketing?

May 14, 2016 | 12:36 PM - Posted by Grabass (not verified)

Will a single GTX 1080 play Grand Theft Auto V maxed out at 4K?

May 14, 2016 | 01:22 PM - Posted by sawe (not verified)

1. Many game developers tweet that DX12 is hard. What can NVIDIA do, and what have you done, to lower the optimization burden on developers?

2. Now that SLI work has shifted to the developers' responsibility, what guarantees are there that it will be done, given that it serves such a marginal group of gamers? Will the move to DX12 kill multi-GPU completely?

May 14, 2016 | 05:54 PM - Posted by Anonymous (not verified)

Game developers (script kiddies, mostly) are not the systems programmers writing the game engines and gaming SDKs! So it's not really that hard when properly written engines, SDKs, and IDEs are there with the software toolchains to abstract away the hard parts of the development process for the majority of game developers. There will even be OpenGL-to-Vulkan wrapper/conversion layers to help port legacy games gradually to the new Vulkan API, and even legacy games that will never be fully ported can still obtain some performance improvements through those wrapper/translation layers. Most of a game's really hard-to-program parts are done by qualified systems programmers who have the necessary skills for the hard coding and other highly technical engine programming tasks!

Both DX12 and Vulkan include, in their respective graphics APIs, newer methods of using multiple GPUs from different manufacturers at the same time for graphics workloads. That new multi-adapter technology will allow for better GPU utilization overall.

May 14, 2016 | 06:21 PM - Posted by Idiot (not verified)

As many details on Async Compute as possible, please.

May 14, 2016 | 07:19 PM - Posted by Anonymous (not verified)

Read this Nvidia whitepaper! You are also going to have to read any AMD async compute whitepapers, plus the ones covering the new Polaris async compute improvements over previous GCN generations, whenever the Polaris whitepaper is available! Better hope the GP100 async improvements make it into the GP104 SKUs!


May 15, 2016 | 02:06 PM - Posted by Idiot (not verified)

You seem to be a smart guy. Is Pascal better at Async Compute than Fury X? I am asking because I am sure I won't understand anything in that whitepaper. But thanks anyway.

May 15, 2016 | 09:36 PM - Posted by Anonymous (not verified)

It's going to be a year before that can be answered, as benchmarking software needs to catch up to the newer DX12/Vulkan APIs, as well as to the Pascal/Polaris hardware. As far as async compute on the Fury X, there should already be AMD whitepapers on GCN ("1.2", or GCN "Gen 3") and earlier GCN generations at AMD's website! It's also going to take a while before users know just what GP100 features are brought down into the GP104 SKUs, or whether GP104 gets any extra consumer/gaming-oriented features that GP100 may lack.

Watch out for any DX11-oriented benchmarks used to draw conclusions about either the Nvidia Pascal-based 1080/1070 or the AMD Polaris offerings: the async compute features from both Nvidia and AMD have to wait for game engines and games to become optimized for these newer DX12/Vulkan APIs, and that takes at least a year or more. The Vulkan API is getting weekly updates and even some vendor-specific extensions, so I'd keep up with the Linux news at Phoronix and other Linux/Vulkan sources.

For sure Nvidia is improving its async compute abilities; the VR folks are clamoring for async compute to reduce VR gaming latency. The HPC/workstation/server market, where Nvidia currently leads in GPU accelerators, is very interested in more GPU async compute features, and hopefully AMD will be getting back into the server/HPC/workstation CPU market as well as the GPU accelerator market. AMD and other GPU makers, PC/laptop and mobile, are gradually migrating more traditionally CPU-like features onto their respective GPU microarchitectures, so watch for more async compute functionality on future GPU SKUs.

Those AMD HPC/server/workstation Zen/Vega and Zen/Navi (modular/scalable GPU design) parts are going to lead to some high-end Zen/Vega/Navi/Polaris-based gaming APUs for laptops/PCs in the future, and that includes HBM2 also! So keep up with silicon interposer technology and interposer-based APU and GPU systems from AMD, and SoC-on-an-interposer systems from others.

One other thing to note for Windows 10/UWP gaming: Windows-based gaming is going to become more locked down, so there will need to be more support for Steam OS/Linux/Vulkan gaming. Luckily, most of the tablet/phone/mobile gaming market is going to be Vulkan API based, with continued OpenGL support for legacy reasons. So Vulkan will have the most cross-platform support across the Linux kernel/Linux OS markets, from IoT/phones/tablets up to PCs/laptops and workstations/supercomputers! Expect a lot of OpenGL games to use some form of OpenGL-to-Vulkan conversion layer/API wrapper, and other code porting tools, to allow even some older games to run on the Vulkan API if there is any advantage in doing so.

The whitepapers will be read, understood, and explained by some reporters, mostly for pay-walled publications, and AMD's "GCN 4"/"GCN 1.3" Polaris whitepapers are still not available, so maybe you can go to a nearby college or university library and read the professional computing science trade journals. The Microprocessor Report is a long-running professional publication covering the CPU market, and some of the GPU market also, because of GPU usage in the server/HPC/workstation markets. There are other professional trade journals that try to explain things for the layman, since some publications need to inform the MBA types about a technology's potential. You are still going to have to wait for more information to become available from both Nvidia and AMD, so that Fury X async compute question may have to wait, for these and the other reasons listed above.

May 14, 2016 | 06:53 PM - Posted by Decoyboy (not verified)

I'm going to keep asking this question to tom every time he is on PCPer.

Can you do G-Sync + ULMB mode yet? Also, are G-Sync HDR monitors with IPS/OLED in the works?

May 14, 2016 | 07:44 PM - Posted by terminal addict

Q: What is the rough timeline for Pascal-based mobile GPUs?

May 14, 2016 | 08:30 PM - Posted by Anonymous (not verified)

Far off, considering the rebranding going on from both Nvidia and AMD. The laptop OEMs, low-ballers that they are, will probably go with the rebrands to save money, and only a very few of the high-cost gaming laptop SKUs will get Nvidia's Pascal, or AMD's Polaris, mobile SKUs at first.

AMD has a better chance to gain some mobile/laptop design wins with its Polaris mobile SKUs being more affordable, but watch out for all that rebranding obfuscation from both the Red and Green teams, and the laptop OEMs!

I'd like to see more Linux-based laptop OEMs start to offer Polaris mobile GPU and Zen/Polaris APU/GPU options, and even the Nvidia die-hards may benefit from more laptop/mobile GPU competition to bring Nvidia's Pascal-based mobile SKU pricing down!

May 16, 2016 | 04:49 PM - Posted by Anonymous (not verified)

Here is a post from AnandTech (an article about AMD's M400 series, mostly mobile rebrands) listing some of Nvidia's rebrands; there are plenty of AMD's multi-year rebrands there also! Both Nvidia and AMD do it, with even more rebrands on OEM laptop SKUs! I think it's time for some serious backlash against the laptop OEMs over their rebranding/naming obfuscation, and maybe some consumer FTC complaints about the practice! But laptop OEMs are pretty good at not listing (and not being required to list) enough product data for consumers to even attempt an informed decision when shopping for a laptop! No wonder the OEM PC/laptop market continues to go downhill!

"GeForce GT 730

AKA GT 630

AKA GT 530

AKA GT 430

All using GF108, rebranded over four years ..."

May 14, 2016 | 10:42 PM - Posted by Anymoose (not verified)

PCPer Live! GeForce GTX 1080 Live Giveaway Stream with Tom Petersen

May 15, 2016 | 12:16 AM - Posted by thejustin84

What will be the benefits of the SLI HB bridge (aside from the obvious one of more bandwidth)? I've often thought about using a dual-card configuration, but the advantages of such a configuration vary too much from game to game. Will we see an increase in the number of titles showing good scaling going forward, or is it more in the hands of the game developers at the end of the day?

May 15, 2016 | 01:11 AM - Posted by greg reavis (not verified)

Why did you guys decide to stick with the SLI connectors vs doing it completely over PCI-e?

Is it bandwidth, latency, or a combination of that and other things?

May 16, 2016 | 12:53 PM - Posted by Anonymous (not verified)

Short answers are the best: to sell SLI bridges.

Guess why nVidia is still selling its SLI bridges even after video cards moved from PCIe 2.0 to PCIe 3.0?

May 15, 2016 | 12:58 PM - Posted by thelude

With the inherent performance bump from changing die node from 28nm to 16nm, what is Nvidia doing on the architecture side to improve performance? My personal view is that with the die stuck at 28nm for a couple of years, the engineers had to focus on the architecture side to gain performance instead of relying on die shrinkage.


May 15, 2016 | 07:09 PM - Posted by Anonymous (not verified)

The shrinking of the die won't imply a performance bump any more. Silicon transistors leak more than ever, and reducing the die only brings a better yield per wafer, while the production economy is more than counterbalanced by the R&D cost.

Consequently, shrinking the die won't be profitable any more, and I agree that chip producers should work more at the design level.

May 16, 2016 | 06:09 PM - Posted by Anonymous (not verified)

"FinFET Scaling Reaches Thermal Limit"


May 16, 2016 | 06:25 PM - Posted by Anonymous (not verified)

Thanks! ;-)

May 16, 2016 | 06:36 PM - Posted by Anonymous (not verified)

P.S. Smaller modular GPU/CPU/other dies spread across larger interposer packages could deliver more processing power by adding increasing numbers of smaller modular die units. These smaller modular dies would have higher wafer/chip yields than bigger dies, which suffer lower yields and higher defect-rate losses.

Maybe future interposer-based systems could even have dies added top and bottom, on cards that allow cooling from both sides of the package. Some sort of support structure could be provided to strengthen the interposer's silicon and prevent cracking, but it looks like 10nm may be the economic limit for some processors. For now, going with larger interposer packages populated with smaller modular GPU dies (large dies traded for many smaller modular ones), other processor dies, and HBM is going to be the more economical method very soon: no more building smaller, but building out and upward (stacking) on a larger silicon interposer package.

May 16, 2016 | 07:16 PM - Posted by Anonymous (not verified)

Spacing cores or units apart won't work miracles, since the heat density coming from current leakage prevents transistors from working correctly (Joule heating and electromagnetic effects).

Once more, the space lost to cooling transistors can't be allocated to pack in more transistors.

From my POV, the core-count race is a waste of transistors, because core utilization doesn't scale well as core counts increase.

To boost performance at the best cost, chip manufacturers should allocate each transistor in an optimum design that makes logical sense to programmers. In other words, allocate transistors to expensive programmed functions that are used very often.

May 16, 2016 | 08:53 PM - Posted by Anonymous (not verified)

I'm not talking about any more spacing (pitch)/process node shrinks on a monolithic die; things will have to go larger. What I meant was maybe stopping at 14nm, getting larger and larger silicon interposer sizes, and placing more, smaller 14nm-fabbed dies/modules on larger interposer packages. So basically staying with 14nm longer, without further process node shrinks, and going with smaller modular/individual dies all wired up via the interposer's silicon substrate to get more processing power in future designs! The separate dies can be wired up through the interposer's substrate the same as if they were made on a single large monolithic die; the silicon interposer is made of the very same silicon the dies stacked on it are made of, so all the processor dies could use TSVs and micro-bumps to connect to an interposer's tens of thousands of etched traces, and to each other.

So just stay with 14nm and go with smaller GPU dies, and other dies (CPU, HBM, whatever), wired up via the silicon interposer in larger and larger numbers. It would be very easy to make separate smaller modular GPU dies, wire them to each other through thousands and thousands of traces in the interposer's silicon, and abstract away at the hardware level the fact that the GPU is made up of a bunch of smaller, separate modular dies; no CrossFire or SLI needed. The interposer-based, scalable Navi GPU designs may do just that!

Current GPU-on-an-interposer die/HBM die pairings use passive interposer designs, with the interposer just etched with connection traces, but future active interposer designs could very well host not only the traces but whole coherent connection-fabric circuitry and memory buffer circuitry, or the circuits to abstract away in hardware the fact that the interposer-based GPU or APU is in fact made up of many smaller modular units/separate dies, making it look like one big GPU/APU to any software, graphics API, or even the OS!

If you look at how GPUs are made, they are already built from modular units in the first place; those units just sit on a single monolithic die. So slicing them into modular dies and splicing them back together via an interposer, while still making them look to the software like one GPU, will not be too hard. Instead of slicing them small, you fabricate them small and individual in the first place for better die/wafer yields, and wire them up via the interposer package. Not a hard thing to engineer!
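The yield argument above can be illustrated with a standard Poisson defect model. The defect density and die areas below are made-up illustrative numbers, not foundry data:

```python
import math

# Poisson yield model: Y = exp(-D * A), where D is defects per cm^2 and
# A is die area in cm^2. The figures below are illustrative assumptions.
defect_density = 0.2          # defects per cm^2 (assumed)
big_die_area = 6.0            # cm^2, one large monolithic GPU die (assumed)
small_die_area = 1.5          # cm^2, one of four modular dies on an interposer

def poisson_yield(d, area):
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-d * area)

y_big = poisson_yield(defect_density, big_die_area)
y_small_each = poisson_yield(defect_density, small_die_area)

# Small dies are binned individually, so a bad one is discarded before
# assembly instead of scrapping an entire big die.
print(f"Monolithic die yield: {y_big:.1%}")
print(f"Per-module yield:     {y_small_each:.1%}")
```

Under these assumed numbers the big die yields about 30% while each small module yields about 74%, which is the economic case the comment is making for modular dies on an interposer.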

May 17, 2016 | 02:00 AM - Posted by Anonymous (not verified)

I still don't see your point about boosting performance... with a modular design. :-|

May 15, 2016 | 12:59 PM - Posted by Anonymous (not verified)

Tom, were you drunk during the Nvidia GPU keynote?
Will you get fired for your behaviour?

May 15, 2016 | 06:11 PM - Posted by Anonymous (not verified)

Will there be support for mixed resolution surround?

May 15, 2016 | 07:25 PM - Posted by btdog

I hate that I have to work then.

May 16, 2016 | 12:30 AM - Posted by khanmein

Does it fully support H.265? Why didn't they add 8GB of VRAM on the previous GTX 980?

May 16, 2016 | 01:52 AM - Posted by renz (not verified)

Same reason the 780 Ti 6GB doesn't exist despite there being a 780 6GB.

May 16, 2016 | 02:28 AM - Posted by Anonymous (not verified)

Ask him about what Jen-Hsun told him backstage, and whether he was drunk that day.
Also about memory bandwidth, possible bottlenecking, and higher GDDR5X speeds not being used.

May 16, 2016 | 04:12 AM - Posted by Anonymous (not verified)

Ask him about what makes him smile more: the raging nVidiot haters number or the nVidia sales number. :o)

May 16, 2016 | 08:59 AM - Posted by JB (not verified)

Why does nVidia try to prevent their cards from running in a virtualized environment? (google "gpu passthrough code 43 error"). There are users who want to do just that, take advantage of the capabilities of modern Intel hardware to game in Windows without having to dual boot.

I know, it's a small subset of linux users, themselves a small subset of all nVidia customers, but that's just the point: why persist in annoying even a small fraction of your customers? What's there to gain?

May 16, 2016 | 09:49 PM - Posted by Anonymous (not verified)

Are those cards consumer cards? If so, maybe it's just Nvidia pulling an Intel, and not wanting users to use GPU virtualization features on the consumer variants. Maybe Nvidia wants that ability reserved for its pro variants that cost more money. That's classic market segmentation as practiced by a large interest with a large market share.

If you are talking about using Xen/KVM/other Linux-based VM/hypervisor packages on any CPU with the hardware ability to run them, then it sounds more like a GPU driver problem, i.e. a pull request for the Linux kernel or the respective VM software's code maintainers!

Please note that modern FirePro/Quadro SKUs are getting the same CPU-like virtualization functionality in their respective GPU microarchitectures, and even some PowerVR variants are getting the same virtualization technology to allow multiple OS and software instances to obtain virtualized slices of the GPU all to themselves!

That passthrough code 43 error looks like a problem between Windows 10 (or earlier) guests and Linux-based VM software. So a driver/other bug!
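For what it's worth, the Code 43 issue under passthrough is usually worked around on the VM side rather than in the driver. A commonly circulated libvirt domain XML fragment hides the hypervisor from the guest (element names per libvirt's domain format documentation; treat this as a sketch to adapt, not a guaranteed fix):

```xml
<features>
  <!-- Hide the KVM hypervisor signature from the guest -->
  <kvm>
    <hidden state='on'/>
  </kvm>
  <!-- Report a non-default vendor id via the Hyper-V enlightenments -->
  <hyperv>
    <vendor_id state='on' value='whatever123'/>
  </hyperv>
</features>
```

The NVIDIA driver in the guest checks for a hypervisor and reports Code 43 when it finds one, so masking those signatures is what users on the VFIO forums typically report as the workaround.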

May 16, 2016 | 09:33 AM - Posted by thelude

Will the new High Bandwidth SLI bridge also work on older generation cards? Or is it exclusively for Pascal.


May 16, 2016 | 12:18 PM - Posted by Anonymous (not verified)


Will nVidia make an offer to buy the CPU business (without the foundries) of AMD? It could be an opportunity to enter the processor market cheaply while AMD desperately needs cash to survive the next 6 months.


May 16, 2016 | 01:14 PM - Posted by YourName#324897234 (not verified)

One question:
1. Will 'Simultaneous Multi-Projection' be added to Maxwell / Kepler? Time frame?

PS: It is likely to be only driver-level support, and I am interested in whether current users will be able to benefit from it. Things like VR boost, Surround monitors, and mixed resolutions are relevant now. Those 970s, 980s, and SLI setups are not going anywhere in a hurry. While the WANT factor is big for Pascal, the NEED factor is not, especially if you game at 1080p or 1440p.

I understand that it is a bad business model, but so is losing the loyalty of customers. I may just ignore those 15 FPS in the benchmark next time, and vote with my wallet more wisely.

May 16, 2016 | 02:51 PM - Posted by snook

Insight: this card is only ~20-25% faster than a 980 Ti in everything but VR, it seems. Again, ask why they went with AMD marketing.

May 16, 2016 | 02:58 PM - Posted by GeekAvenger (not verified)

The A&E firm I work for uses Oculus Rifts to do customer walkthroughs of buildings we have designed in Revit. Currently we are using the same towers we design on (i.e., Quadro GPUs). Do you think it makes sense to have purpose-built VR machines for demos? Quadro doesn't push VR in any of its marketing. So in short, how does the VR performance of a 1080 compare to something like an M2000- or M4000-series card?

May 16, 2016 | 03:25 PM - Posted by Dato

1. When will Nvidia launch G-Sync 2, where both ULMB and G-Sync can work at the same time?
2. Does Simultaneous Multi-Projection work with different curved screens (21:9/16:9) and two-monitor setups?

May 16, 2016 | 03:28 PM - Posted by GPeterson (not verified)

I still don't understand the pricing. Is $699 the new standard price of the GTX 1080? That's $50 more than the 980 Ti. Is this the way of Nvidia's new pricing structure?

May 16, 2016 | 03:37 PM - Posted by vicky (not verified)

How is async compute going to be supported this time?

The same as last time, via driver support? Or is there a new hardware block that we don't know about?

May 16, 2016 | 04:03 PM - Posted by Logun

Not sure how to word this properly:
The 1080 uses GDDR5X. What is the anticipated impact of HBM on your current architecture and plans? Given the sizable performance increase of the 1080 over the 980, do you anticipate even greater gains with the adoption of HBM2?
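For rough scale, peak memory bandwidth is just data rate times bus width. The GDDR5X figures below match the GTX 1080's published specs; the HBM2 per-pin rate is the spec's peak, and the stack count is an assumed configuration for illustration:

```python
# Peak memory bandwidth, back of the envelope.
# GDDR5X figures match the GTX 1080's published specs; the HBM2 stack
# count is an assumption for illustration, using the spec's peak pin rate.
gddr5x_rate_gbps = 10        # Gb/s per pin on the GTX 1080
gddr5x_bus_bits = 256
gddr5x_bw = gddr5x_rate_gbps * gddr5x_bus_bits / 8     # GB/s -> 320

hbm2_rate_gbps = 2           # Gb/s per pin, HBM2 spec peak
hbm2_stack_bits = 1024       # bus width per stack
hbm2_stacks = 4              # assumed, e.g. a big-chip configuration
hbm2_bw = hbm2_rate_gbps * hbm2_stack_bits * hbm2_stacks / 8  # GB/s -> 1024

print(f"GDDR5X, 256-bit @ 10 Gbps : {gddr5x_bw:.0f} GB/s")
print(f"HBM2, 4 stacks @ 2 Gbps   : {hbm2_bw:.0f} GB/s")
```

So HBM2 at full spec would roughly triple the 1080's bandwidth, though whether that translates to gaming gains depends on how bandwidth-bound the workload is.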

May 16, 2016 | 04:17 PM - Posted by slugbug55

I should be home unless something changes before then.

May 16, 2016 | 04:29 PM - Posted by Anonymous (not verified)

" fastest graphics card we have ever tested!! "

So this is faster than the Radeon Pro Duo??? Or did he mean fastest GPU?

May 16, 2016 | 04:51 PM - Posted by Mike1080 (not verified)

Can you test and supply us with some ethereum mining hashrates?

May 16, 2016 | 05:16 PM - Posted by Prodeous

I'd like to know if Nvidia is putting any resources into OpenCL and better driver support for the 10 series. Performance is lagging way behind CUDA...

May 16, 2016 | 06:53 PM - Posted by tripkj (not verified)

I've failed on orderof10; I think I'll be lucky here (>‿◠)✌

May 16, 2016 | 07:28 PM - Posted by YTech

I've done well so far. On the puzzles, that is. Winning the GTX 1080 would be sweet. I don't mind the wait, as it would arrive just in time, once everything settles down (hopefully).

I do wonder how well it would perform on a 3D HDTV while displaying in stereoscopic. I had to install some foreign drivers for my GTX 285M to work.

May 16, 2016 | 07:53 PM - Posted by YTech

(Double post - Drupal Error Page)

May 16, 2016 | 07:36 PM - Posted by Arkamwest

Is asynchronous compute supported in hardware, or is it just a new driver?
How much performance gain would you see coming from a GTX 780 to a 1080?

May 16, 2016 | 08:58 PM - Posted by Kevin (not verified)

I'd be interested in info on mid range cards too like the 1060, but yea I guess they can't talk about that yet.

Very happy with my GTX 960 4GB card. There is not much this card cannot do at 1080p.

May 16, 2016 | 10:26 PM - Posted by Raguel

I have two questions regarding simultaneous multi-projection.

Q1) How was the decision arrived at to use only four viewport projections per eye (8 total) when approximating stereo VR, given that, according to the presentation, the cards support up to 16 viewports? It seems that 4 viewports will offer only a relatively crude approximation of the elliptical distortion of the lenses.
Additionally, it results in the merge boundaries forming a cross pattern bisecting both the vertical and horizontal axes of the observer's view.
Q1.a) Was this driven by a limitation that the viewports must use the same dimensions, resolution, and coverage angles? If so, it feels like a 2x3 wide-aspect arrangement would offer a superior approximation. And, depending on how it is supported, it opens the possibility of limited foveated rendering, where the outer 4 viewports could be rendered at reduced quality (e.g., AA disabled).
[_][_][_] or [x][_][x]
[_][_][_] or [x][_][x]

Q1.b) If each view port can have its own resolution, then an even better approach would be a 2-3-2 pattern.
[___][___] or [xxx][xxx]
[_][__][_] or [x][__][x]
[___][___] or [xxx][xxx]

Q1.c) Perhaps it was a design decision to simplify implementation across a range of hardware, implying that lower-end SKUs might not support 16 view ports but will at least retain support for 8? If that is the case, could we expect a driver update to support finer approximation patterns on different cards in the future?
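To put numbers on the layouts sketched above: the 16-view-port ceiling comes from the presentation, while the 2x3 and 2-3-2 grids are the commenter's hypothetical alternatives. A quick tally shows each proposal still fits within that cap when doubled for stereo:

```python
# View ports per eye for each layout; the grid shapes beyond the shipping
# 2x2 are the commenter's proposals, not confirmed NVIDIA configurations.
layouts = {
    "shipping 2x2": 2 * 2,        # 4 per eye, as described in the presentation
    "proposed 2x3": 2 * 3,        # 6 per eye, wide-aspect grid
    "proposed 2-3-2": 2 + 3 + 2,  # 7 per eye, finer middle row
}

for name, per_eye in layouts.items():
    total = per_eye * 2  # stereo VR renders one set of view ports per eye
    print(f"{name}: {per_eye}/eye, {total} total, within 16-port cap: {total <= 16}")
```

So even the densest 2-3-2 proposal uses 14 of the 16 view ports the hardware is claimed to support, which is what makes the question about finer approximation patterns plausible.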

Q2) Related to the above: my current surround setup is a 43" 4K monitor flanked by a pair of 24" 1920x1200 monitors in portrait. Will mixed orientations/resolutions be supported?

May 17, 2016 | 01:42 AM - Posted by GreenMoniker

For Tom,

Why did NVIDIA decide to release the Founders Edition, and what does it have to justify the $100 price premium?

Thank You for your time.

P.S. The reveal was awesome to watch, and the technical difficulties were amusing. I hope you got to see the Twitch chat log of people chanting TOM.

May 17, 2016 | 02:56 AM - Posted by steen (not verified)

With 3 displays and simultaneous multi-projection for perspective correction on the surround displays: how do you account for the angle of the left and right displays relative to the centre display? Is the angle of perspective correction a tweakable setting?

May 17, 2016 | 03:45 AM - Posted by Jim Bryant (not verified)

Will the old SLI bridges still work for 3-way and 4-way? If so, what will the impact on performance be, given that the new bridge teams both card-edge connectors?

May 17, 2016 | 04:24 AM - Posted by Todor Belchev (not verified)

I'm really disappointed by the lack of good monitors on the consumer market. Pascal now supports HDR, wide colour gamut, and high refresh rates over DisplayPort. Do you have any information on upcoming monitors that wouldn't cost $5000, so we can fully enjoy new HDR content? Or do we have to buy a TV (something like an SUHD) and play on that, despite the high input lag of TVs?

May 17, 2016 | 05:00 AM - Posted by stigbeater (not verified)

Question for Tom: is this GPU just a die shrink of a Maxwell GPU with higher clocks?

May 17, 2016 | 07:35 AM - Posted by Anonymous (not verified)

So many nvidia shills and terfs in the comments, is it an easy way to make money?

May 17, 2016 | 07:58 AM - Posted by mikesheadroom

Like another commenter asked, I'm interested to know when we can start seeing displays that go beyond 4K/60 using the new DisplayPort standard.

May 17, 2016 | 09:15 AM - Posted by Randall-one

Q. Can you verify that the intro demo of the Founders Edition 1080 was done with a stock card, like those we can order on the 27th?

Q. Was the intro demo's overclocking accomplished with publicly available software?

Q. Did the intro demo card have only an 8-pin power connector?

May 17, 2016 | 12:24 PM - Posted by Gobo

Tom, how do you expect overclocking utilities will address the changes to overclocking in GPU Boost 3.0?

May 17, 2016 | 12:45 PM - Posted by NamelessTed

It would also be nice if Tom could try to justify the Founders Edition marketing and pricing, which caused confusion by not being properly explained. I feel like announcing a release date and an MSRP of $600, but only releasing a $700 Founders Edition on that date, is deceptive. When will $600 cards actually be available?

May 17, 2016 | 12:46 PM - Posted by aridren (not verified)

When will the 1060 or 1060 Ti series cards be launched? I know a precise date cannot be disclosed, but as an approximation, is it "soon" as in the next few months, or "soon" as in after Christmas?

May 17, 2016 | 12:57 PM - Posted by aridren (not verified)

Tom, how is asynchronous compute managed differently in Pascal's hardware compared to Maxwell's, and why is it more efficient? Will NVIDIA provide asynchronous compute support for Maxwell cards as well through drivers?

Thank you!

May 17, 2016 | 12:57 PM - Posted by stevex291 (not verified)

To upgrade from my 980 Ti or not... that is the question.

May 17, 2016 | 01:02 PM - Posted by Anonymous (not verified)

I want that card soooooooo bad :')

May 17, 2016 | 01:09 PM - Posted by djGrrr

Can SLI-HB be accomplished using 2 older SLI bridges, or does it require a new SLI-HB bridge?
