Subject: Graphics Cards, Mobile | June 4, 2016 - 04:28 PM | Scott Michaud
Tagged: nvidia, GTX 1080, gtx 1070, pascal
Normally, when a GPU developer creates a laptop SKU, they reuse the desktop branding, append an M, and release a very different, significantly slower part. This changed with the GTX 980, as NVIDIA cherry-picked the heck out of their production to find chips that could operate at full speed with a lower-than-usual TDP. With less power (and cooling) to consider, these chips were sent to laptop manufacturers and integrated into high-end designs.
They still had the lower-performance 980M, though, which was confusing for potential customers. You needed to know to avoid the M, and trust the product page to correctly include the M where applicable. This is where PC Gamer's scoop comes into play. Apparently, NVIDIA will stop “producing separate M versions of its desktop GPUs”. Also, they are expected to release their 10-series desktop GPUs to their laptop partners by late summer.
Last time, NVIDIA took almost a year to bin enough GPUs for laptops. While we don't know how long they've been stockpiling GP104 GPUs, this, if the rumors are true, would put laptop parts just about three months behind the desktop SKUs. Granted, Pascal is significantly more efficient than Maxwell. Maxwell tried to squeeze extra performance out of an existing fabrication node, while Pascal is a relatively small chip that benefits from the industry's double-shrink in process technology. It's possible that NVIDIA didn't need to drop the TDP threshold that far below what they accept for desktop parts.
For us desktop users, this also suggests that NVIDIA is not having too many issues with yield in general. I mean, if they were expecting GPU shortages to persist for months, you wouldn't expect them to cut their supply further with a new product segment, particularly one that should require both decent volume and well-binned chips. This, again, might mean that we'll see desktop GPUs restock soon. Either that, or NVIDIA significantly miscalculated demand for new GPUs and needed to fulfill partner obligations made before reality struck.
Call it wishful thinking, but I don't think it's the latter.
Subject: Graphics Cards | June 4, 2016 - 02:51 PM | Scott Michaud
Tagged: nvidia, GTX 1080, zotac
(Most of) NVIDIA's AIB partners have been flooding out announcements of custom GTX 1080 designs. Looking over them, it seems like they fall into two camps: one believes 1x eight-pin PCIe power is sufficient for the GP104, and the other thinks that 1x eight-pin + 1x six-pin PCIe could be useful.
ZOTAC, on the other hand, seems to believe that both are underestimating. Excluding the Founders Edition, both of their GTX 1080 designs utilize 2x eight-pin PCIe connectors. This gives their cards a theoretical maximum of 375W, versus the Founders Edition's 225W. At this point, considering the Founders Edition can reach 2.1 GHz with good enough binning, I'm guessing that the second connector is either there simply because they can, or because they didn't want to alter an existing design. Note that, if you only have 6-pin PCIe connectors on your power supply, ZOTAC provides dual-six-to-eight-pin adapters in the box.
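For anyone curious where those theoretical maximums come from, it's simple connector math. A minimal sketch, using the PCI Express spec limits of 75W from the slot, 75W per 6-pin connector, and 150W per 8-pin connector:

```python
# PCIe spec power limits, in watts.
SLOT = 75        # delivered through the x16 slot itself
SIX_PIN = 75     # per 6-pin auxiliary connector
EIGHT_PIN = 150  # per 8-pin auxiliary connector

def power_budget(six_pin=0, eight_pin=0):
    """Theoretical maximum board power for a given connector layout."""
    return SLOT + six_pin * SIX_PIN + eight_pin * EIGHT_PIN

print(power_budget(eight_pin=1))             # Founders Edition: 225 W
print(power_budget(six_pin=1, eight_pin=1))  # 8-pin + 6-pin boards: 300 W
print(power_budget(eight_pin=2))             # ZOTAC's 2x 8-pin: 375 W
```

These are spec ceilings, not measured draw; actual consumption is governed by the card's power target, not the connectors.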
The two SKUs that they are releasing, again, apart from the Founders Edition, vary by their heatsink. The ZOTAC GeForce GTX 1080 AMP has a dual-fan IceStorm cooler, while the ZOTAC GeForce GTX 1080 AMP Extreme has a triple-fan IceStorm cooler. (IceStorm is the brand name of ZOTAC's custom cooler.) Other than the 2x 8-pin PCIe connector, there's not much else to mention. ZOTAC has not settled on a default base, boost, or memory clock, and you will probably be overclocking it yourself (either manually or by using an automatic overclocker) anyway. Both cards have a back plate, if that's something you're interested in.
Once again, no pricing or availability. It shouldn't be too long, though.
Subject: Graphics Cards | June 4, 2016 - 01:53 PM | Scott Michaud
Tagged: nvidia, msi, hydro gfx, GTX 1080, corsair
Last week, we wrote about the MSI GeForce GTX 1080 SEA HAWK. This design took their AERO cooler and integrated a Corsair self-contained water cooler into it. In response, Corsair, not to be outdone by MSI's Corsair partnership, partnered with MSI to release their own graphics card, the GeForce GTX 1080 version of the Corsair Hydro GFX.
The MSI SEA HAWK
Basically, like we saw with their previous Hydro GFX card, Corsair and MSI are each selling the same graphics card, just with their own branding. It sounds like the two cards, MSI's SEA HAWK and Corsair's Hydro GFX, differ slightly in terms of LED lighting, but that might just be a mismatch between Tom's Hardware's Computex coverage and MSI's product page. Otherwise, I would guess that the choice between these SKUs comes down to the company you trust most for support, and I believe both Corsair and MSI hold good reputations there, and the current price at the specific retailer you choose. Maybe some slight variation in clock rate?
The Corsair Hydro GFX at Computex
(Image Credit: Tom's Hardware)
For the record, both cards use a single, eight-pin PCIe power connector, rather than the eight-pin plus six-pin that we've seen a few high-end boards opt for.
No idea about pricing or availability. Corsair's page still refers to the GTX 980 Ti model.
Subject: Graphics Cards | June 4, 2016 - 12:35 PM | Sebastian Peak
Tagged: revving, report, nvidia, GTX 1080, gpu cooler, founders edition, fan speed, fan issue
“NVIDIA has reportedly found the solution and the problem will be fixed with the next driver release. An NVIDIA rep confirmed that the software team was able to reproduce this problem, and their fix has already passed internal testing.”
Image credit: PC Games Hardware
On the NVIDIA forums, customer care representative Manuel Guzman has posted about the issue, and now it seems a fix will be provided with the next driver release:
“This thread is to keep users up to date on the status of the fan randomly spinning up and down rapidly that some users are reporting with their GeForce GTX 1080 Founders Edition card. Thank you for your patience.
Updates Sticky Post
Update 6/1/16 - We are testing a driver fix to address the random spin up/down fan issue.
Update 6/2/16 - Driver fix so far has passed internal testing. Fix will be part of our next driver release.”
For those who have experienced the “revving” issue, described as a rapid rise and fall from 2000 RPM to 3000 RPM in the post, this will doubtless come as welcome news. We will have to see how these cards perform once the updated driver has been released and is in user hands.
Subject: Graphics Cards | June 4, 2016 - 02:39 AM | Scott Michaud
Tagged: evga, sli, SLI HB, GTX 1080, nvidia, gtx 1070
Still no idea when these are coming out, or how much they'll cost, but EVGA will introduce their own custom SLI High-Bandwidth (HB) bridges. These are designed for GTX 1080 and GTX 1070 cards in two-way SLI to share frames at high speed in 1440p, 4K, 5K, and Surround configurations. As TAP mentioned on our live stream, if the bridge were overloaded, it would fall back to communicating over PCIe, which should be busy doing other things at the time.
NVIDIA's SLI HB connector.
We didn't have a presence at Computex, so you'll need to check out Fudzilla's photo for a look at EVGA's design.
As for EVGA's model? Without pricing and availability, all we can say is that they have a different aesthetic from NVIDIA's. They also, unlike NVIDIA's version, have RGB LEDs to add a splash (or another splash) of colored light inside your case. Three versions will be offered, varying by the distance between your cards, but, as the SLI HB spec demands, each supports only two cards at a time.
Subject: Graphics Cards | May 30, 2016 - 03:50 PM | Jeremy Hellstrom
Tagged: geforce, GP104, gtx 1070, nvidia, pascal
If we missed your favourite game, synthetic benchmark, or a specific competitor's card in our review of the new GTX 1070, then perhaps one of the sites below might satisfy your cravings. For instance, if it is Ashes of the Singularity or The Division that you want to see benchmarked, [H]ard|OCP has you covered. They also had a go at overclocking: using the new software, they set the card's fan speed to 100%, the power target to 112%, and the GPU offset to +230. That resulted in a peak GPU speed of 2113MHz, although the averaged frequency over a 30 minute gaming session was 2052MHz. They will revisit the card to overclock the memory in the near future. Check out their full review here.
"The second video card in the NVIDIA next generation Pascal GPU architecture is finally here, we will explore the GeForce GTX 1070 Founders Edition video card. In this limited preview today we will look at performance in comparison to GeForce GTX 980 Ti and Radeon R9 Fury X as well as some preview overclocking."
Here are some more Graphics Card articles from around the web:
- GeForce GTX 1070 FCAT Frametime Analysis @ Guru of 3D
- NVIDIA GTX 1070 Review - The Revolution Continues @ HiTech Legion
- Nvidia GeForce GTX 1070 @ Legion Hardware
- NVIDIA GeForce GTX 1070 Founders Edition Review @ OCC
- Radeon Linux 4.6 + Mesa 11.3 vs. NVIDIA Linux Performance & Perf-Per-Watt @ Phoronix
- XFX Radeon R9 Fury Triple Dissipation @ [H]ard|OCP
- NVIDIA GeForce GTX 1080 vs Titan X vs R9 Fury vs GTX 980 Ti vs GTX 980 vs R9 390X @ HiTech Legion
- NVIDIA GeForce GTX 1080 Overclocking Review @ OCC
GP104 Strikes Again
It’s only been three weeks since NVIDIA unveiled the GeForce GTX 1080 and GTX 1070 graphics cards at a live streaming event in Austin, TX. But it feels like those two GPUs, one of which hasn't even been reviewed until today, have already drastically shifted the landscape of graphics, VR and PC gaming.
Half of the “new GPU” stories are told, with AMD due to follow up soon with Polaris, but it was clear to anyone watching the enthusiast segment with a hint of history that a line was drawn in the sand that day. There is THEN, and there is NOW. Today’s detailed review of the GeForce GTX 1070 completes NVIDIA’s first wave of NOW products, following closely behind the GeForce GTX 1080.
Interestingly, and in a move that is very uncharacteristic of NVIDIA, detailed specifications of the GeForce GTX 1070 were released on GeForce.com well before today’s reviews. With information on the CUDA core count, clock speeds, and memory bandwidth it was possible to get a solid sense of where the GTX 1070 performed; and I imagine that many of you already did the napkin math to figure that out. There is no more guessing though - reviews and testing are all done, and I think you'll find that the GTX 1070 is as exciting, if not more so, than the GTX 1080 due to the performance and pricing combination that it provides.
Let’s dive in.
Subject: Graphics Cards | May 29, 2016 - 05:46 PM | Scott Michaud
Tagged: msi, GTX 1080, sea hawk, gaming x, armor, Aero, nvidia
Beyond the Founders Edition, MSI has prepared six SKUs of the GTX 1080. These consist of four designs, two of which also have an overclocked counterpart, making up the remaining two products. The product stack seems quite interesting, with a steady progression across user needs, but we'll need to wait for pricing and availability to know for sure.
We'll start at the bottom with the MSI GeForce GTX 1080 AERO 8G and MSI GeForce GTX 1080 AERO 8G OC. These are your typical blower designs that pull air in from inside the case and exhaust it out the back after collecting a bunch of heat from the GPU. It will work, it should be one of the cheapest options for this card, and it will keep the GTX 1080's heat outside of the case. It has a little silver accent on it, too. The non-overclocked version runs the standard 1607 MHz / 1733 MHz that NVIDIA advertises, and the OC SKU is a little higher: 1632 MHz / 1771 MHz.
Next up the product stack are the MSI GeForce GTX 1080 ARMOR 8G and MSI GeForce GTX 1080 ARMOR 8G OC. These use MSI's aftermarket two-fan cooler, which should provide much lower temperatures than the AERO, but they exhaust heat back into the case. Personally? I don't really care about that. The only other thing that heats up in my case, to any concerning level at least, is my CPU, and I recently switched that to a closed-loop water cooler anyway. MSI added an extra six-pin power connector to these cards (totaling 8-pin + 6-pin + slot power = up to 300W, versus 8-pin + slot power's 225W). The non-overclocked version is NVIDIA's base 1607 MHz / 1733 MHz, but the OC model brings that up to 1657 MHz / 1797 MHz.
Speaking of closed-loop water coolers... The MSI GeForce GTX 1080 SEA HAWK takes the AERO design, which we mentioned earlier, and puts a Corsair self-contained water cooler inside it, too. Only one SKU of this is available, clocked at 1708 MHz base and 1847 MHz boost, but it should support overclocking fairly easily. That said, unlike other options that add a bonus six-pin connector, the SEA HAWK has just one, eight-pin connector. Good enough for the Founders Edition, but other SKUs (including three of the other cards in this post) suggest that there's a reason to up the power ceiling.
We now get to MSI's top air-cooled SKU: the MSI GeForce GTX 1080 GAMING X 8G. This one has their new TWIN FROZR VI cooler, which they claim runs quieter, with fans that move more air at lower speeds than previous models. It, as you would assume from reading about the ARMOR 8G, has an extra six-pin power connector to provide more overclocking headroom. It has three modes: Silent, which clocks the card to the standard 1607 MHz / 1733 MHz levels; Gaming, which significantly raises that to 1683 MHz / 1822 MHz; and OC, which bumps that slightly further to 1708 MHz / 1847 MHz.
Currently, no pricing and availability for any of these.
Subject: Graphics Cards | May 28, 2016 - 05:00 PM | Scott Michaud
Tagged: asus, ROG, strix, GTX 1080, nvidia
The Founders Edition versions of the GTX 1080 went on sale yesterday, but we're beginning to see the third-party variants being announced. In this case, the ASUS ROG Strix is a three-fan design that uses their DirectCU III heatsink. More interestingly, ASUS decided to increase the wattage that this card can accept by adding an extra six-pin PCIe power connector (totaling 8-pin + 6-pin). A Founders Edition card only requires a single eight-pin connection beyond the 75W provided by the PCIe slot itself. This gives the ROG Strix card an extra 75W of play room, raising the maximum power from 225W to 300W.
Some of this power will be used for its on-card, RGB LED lighting, but I doubt that it was the reason for the extra 75W of headroom. The lights follow the edges of the card, acting like hats and bow-ties to the three fans. (Yes, you will never unsee that now.) The shroud is also modular, and ASUS provides the data for enthusiasts to 3D print their own modifications (albeit their warranty doesn't cover damage caused by this level of customization).
As for the actual performance, the card naturally comes with an overclock out of the box. The default “Gaming Mode” has a 1759 MHz base clock with an 1898 MHz boost. You can flip this into “OC Mode” for a slight, two-digit increase to 1784 MHz base and 1936 MHz boost. Either mode is significantly higher than the Founders Edition, which has a base clock of 1607 MHz that boosts to 1733 MHz. The extra power will likely help manual overclocks, but it will come down to the “silicon lottery” and how lightly your specific chip was touched by manufacturing variation. We also don't know yet whether the Pascal architecture, and the 16nm process it relies upon, has any physical limits that increasingly resist overclocks past a certain frequency.
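To put those factory overclocks in perspective, here is a quick sketch that computes the percentage gain of each quoted ROG Strix clock over the Founders Edition baseline (just arithmetic on the numbers above, nothing more):

```python
# Founders Edition reference clocks, in MHz.
founders = {"base": 1607, "boost": 1733}

# ASUS ROG Strix clocks as quoted above, in MHz.
strix = {
    "Gaming Mode": {"base": 1759, "boost": 1898},
    "OC Mode": {"base": 1784, "boost": 1936},
}

for mode, clocks in strix.items():
    for kind in ("base", "boost"):
        gain = 100 * (clocks[kind] / founders[kind] - 1)
        print(f"{mode} {kind}: {clocks[kind]} MHz (+{gain:.1f}%)")
```

Running that shows the out-of-the-box overclock lands in the high single digits for Gaming Mode and a bit over 11% for OC Mode, before GPU Boost or manual tuning enters the picture.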
Pricing and availability is not yet announced.
First, Some Background
NVIDIA's Rumored GP102
When GP100 was announced, Josh and I were discussing, internally, how it would make sense for the gaming industry. Recently, an article on WCCFTech cited anonymous sources, which should always be taken with a dash of salt, claiming NVIDIA was planning a second chip, GP102, positioned between GP104 and GP100. As I was writing this editorial about it, relating it to our own speculation about the physics of Pascal, VideoCardz claimed to have been contacted by the developers of AIDA64, seemingly on the record, also citing a GP102 design.
I will retell chunks of the rumor, but also add my opinion to it.
In the last few generations, each architecture had a flagship chip that was released in both gaming and professional SKUs. Neither audience had access to a chip that was larger than the other's largest of that generation. Clock rates and disabled portions varied by specific product, with gaming usually getting the more aggressive performance for slightly better benchmarks. Fermi had GF100/GF110, Kepler had GK110/GK210, and Maxwell had GM200. Each of these was available in Tesla, Quadro, and GeForce cards, especially Titans.
Maxwell was interesting, though. NVIDIA was unable to leave 28nm, which Kepler launched on, so they created a second architecture on that node. To increase performance without access to more feature density, you need to make your designs bigger, more optimized, or simpler. GM200 was giant and optimized but, to hit the performance levels it achieved, it also needed to be simpler. Something needed to go, and double-precision (FP64) performance was the big omission. NVIDIA was upfront about it at the Titan X launch, and told their GPU compute customers to keep purchasing Kepler if they valued FP64.