Subject: Graphics Cards | June 21, 2016 - 05:22 PM | Scott Michaud
Tagged: nvidia, fermi, kepler, maxwell, pascal, gf100, gf110, GK104, gk110, GM204, gm200, GP104
Techspot published an article comparing eight GPUs across six high-end dies from NVIDIA's last four architectures: Fermi through Pascal. Average frame rates were listed for nine games, each measured at three resolutions: 1366x768 (~720p HD), 1920x1080 (1080p FHD), and 2560x1600 (~1440p QHD).
The results are interesting. Comparing GP104 to GF100, mainstream Pascal is typically on the order of four times faster than big Fermi. Over that time, we've had three full generational leaps in fabrication technology, leading to over twice the number of transistors packed into a die that is almost half the size. It does, however, show that prices have remained relatively constant, except that the GTX 1080 is sort-of priced in the x80 Ti category despite the die size placing it in the non-Ti class. (They list the 1080 at $600, but you can't really find anything outside the $650-700 USD range).
It would be interesting to see this data set compared against AMD. It's informative for an NVIDIA-only article, though.
Subject: Graphics Cards | June 20, 2016 - 04:11 PM | Jeremy Hellstrom
Tagged: windows 10, ubuntu, R9 Fury, nvidia, linux, GTX1070, amd
Phoronix wanted to test how the new GTX 1070 and the R9 Fury compare on Ubuntu with new drivers and patches, as well as how they perform on Windows 10. There are two separate articles, as the focus is not old silicon versus new but a performance comparison between the two operating systems. AMD was tested with the Crimson Edition 16.6.1 driver, the AMDGPU-PRO Beta 2 (16.20.3) driver, and Mesa 12.1-dev. There were interesting differences between the tested games, as some would only support one of the two Linux drivers. Performance also varies by game engine: some results were ties, Windows 10 pulled ahead in others, and in some cases performance on Linux was significantly better.
NVIDIA's GTX 1080 and 1070 were tested using the 368.39 driver release for Windows and the 367.27 driver for Ubuntu. Again we see mixed results; depending on the game, Linux performance might actually beat out Windows, especially if OpenGL is an option.
Check out both reviews to see what performance you can expect from your GPU when gaming under Linux.
"Yesterday I published some Windows 10 vs. Ubuntu 16.04 Linux gaming benchmarks using the GeForce GTX 1070 and GTX 1080 graphics cards. Those numbers were interesting with the NVIDIA proprietary driver but for benchmarking this weekend are Windows 10 results with Radeon Software compared to Ubuntu 16.04 running the new AMDGPU-PRO hybrid driver as well as the latest Git code for a pure open-source driver stack."
Here are some more Graphics Card articles from around the web:
- GeForce GTX 1070 and GTX 1080 FE Overclocking @ [H]ard|OCP
- DX11 vs DX12 Intel 4770K vs 5960X Framerate Scaling @ [H]ard|OCP
- MSI GTX 1080 & GTX 1070 Gaming X 8G Overclocking Review @ OCC
- EVGA GeForce GTX 1080 FTW Gaming ACX 3.0 Review @ HiTech Legion
- Gigabyte GTX 1080 G1 Gaming RGB @ Kitguru
- ASUS GTX 1080 Strix Gaming 8 GB @ techPowerUp
- HIS Radeon R7 360 GREEN iCooler OC 2GB Graphics Card Review @ NikKTech
Subject: Graphics Cards | June 20, 2016 - 01:57 PM | Scott Michaud
Tagged: tesla, pascal, nvidia, GP100
GP100, the “Big Pascal” chip that was announced at GTC, will be coming to PCIe for enterprise and supercomputer customers in Q4 2016. Previously, it had only been announced for NVLink, NVIDIA's proprietary interconnect. In fact, they also gave themselves some lead time with their first-party DGX-1 system, which retails for $129,000 USD, although we expect that was more for yield reasons. Josh calculated that each GPU in that system is worth more than the full wafer its die was manufactured on.
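A rough back-of-the-envelope version of Josh's comparison (the eight-GPU count is the DGX-1's published spec; treating the full system price as GPU value is an obvious simplification on my part):

```python
# Rough per-GPU value in the DGX-1, assuming the system price is
# dominated by its eight Tesla P100 modules (a simplification).
DGX1_PRICE_USD = 129_000
GPUS_PER_DGX1 = 8  # published DGX-1 spec

per_gpu_value = DGX1_PRICE_USD / GPUS_PER_DGX1
print(per_gpu_value)  # 16125.0
```

Even at $16,125 per module, that comfortably exceeds the commonly quoted prices for a single leading-edge wafer.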
This brings us to the PCIe versions. Interestingly, they have been down-binned from the NVLink version. The boost clock has been dropped to 1300 MHz, from 1480 MHz, although that is matched with a slightly lower TDP (250W versus the NVLink's 300W). This lowers the FP16 performance to 18.7 TFLOPs, down from 21.2, FP32 performance to 9.3 TFLOPs, down from 10.6, and FP64 performance to 4.7 TFLOPs, down from 5.3. This is where we get to the question: did NVIDIA reduce the clocks to hit a 250W TDP and be compatible with the passive cooling technology that previous Tesla cards utilize, or were the clocks dropped to increase yield?
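For reference, those TFLOP figures fall straight out of the shader count and clock. A quick sketch, assuming GP100's published 3584 FP32 CUDA cores, with FP64 at half rate and FP16 at double rate (the GP100 ratios):

```python
# Peak throughput = cores x 2 ops/clock (fused multiply-add) x clock (GHz)
FP32_CORES = 3584       # GP100 published spec
BOOST_GHZ_PCIE = 1.300  # PCIe card boost clock

def tflops(cores, ghz):
    return cores * 2 * ghz / 1000.0

fp32 = tflops(FP32_CORES, BOOST_GHZ_PCIE)       # ~9.3 TFLOPs
fp64 = tflops(FP32_CORES // 2, BOOST_GHZ_PCIE)  # half rate: ~4.7 TFLOPs
fp16 = tflops(FP32_CORES * 2, BOOST_GHZ_PCIE)   # double rate: ~18.6 TFLOPs
print(round(fp32, 1), round(fp64, 1), round(fp16, 1))
```

The FP16 result rounds to 18.6 rather than NVIDIA's quoted 18.7, which suggests their math uses a marginally higher boost clock; the same formula with 1.48 GHz reproduces the NVLink card's 10.6/5.3/21.2 figures.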
They are also providing a 12GB version of the PCIe Tesla P100. I didn't realize that GPU vendors could selectively disable HBM2 stacks, but NVIDIA disabled 4GB of memory, which also dropped the bus width to 3072-bit. You would think that, for circuit simplicity, memory would be divided in a power-of-two fashion, but, knowing that they can do otherwise, it makes me wonder why they did. Again, my first reaction is to question GP100 yield, but the HBM interface is such a small part of the die that you wouldn't expect disabling a chunk of it to reclaim many chips, right? That is, unless the HBM2 stacks themselves have yield issues -- which would be interesting.
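The bus-width math follows from HBM2's per-stack interface: each stack is 1024 bits wide, so three active stacks give 3072 bits. A sketch of the corresponding bandwidth (the ~1.43 Gbps per-pin rate here is my inference from the 16GB card's quoted ~732 GB/s, not an official per-pin figure):

```python
# HBM2: each stack presents a 1024-bit interface
STACK_WIDTH_BITS = 1024
GBPS_PER_PIN = 1.43  # assumption, back-solved from the 16GB card's ~732 GB/s

def bandwidth_gbs(stacks):
    bus_bits = stacks * STACK_WIDTH_BITS
    return bus_bits / 8 * GBPS_PER_PIN  # bytes/transfer x transfer rate

print(round(bandwidth_gbs(4)))  # 732  (4096-bit, 16GB card)
print(round(bandwidth_gbs(3)))  # 549  (3072-bit, 12GB card)
```

So disabling one stack costs a quarter of both capacity and bandwidth, not just capacity.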
There is also still no word on a 32GB version. Samsung claimed the memory technology, 8GB stacks of HBM2, would be ready for products in Q4 2016 or early 2017. We'll need to wait and see where, when, and why it will appear.
Subject: Graphics Cards | June 18, 2016 - 10:37 PM | Scott Michaud
Tagged: nvidia, graphics drivers
GeForce Hotfix 368.51 drivers have been released by NVIDIA through their support website. This version only officially addresses flickering at high refresh rates, although its number has been incremented quite a bit since the last official release (368.39) so it's possible that it rolls in other changes, too. That said, I haven't heard too many specific issues with 368.39, so I'm not quite sure what that would be.
As always with a hotfix driver, NVIDIA pushed it out with minimal testing. It should pretty much only be installed if you have a specific issue (particularly the listed one(s)) and you don't want to wait until one is released that both NVIDIA and Microsoft looked over (although Microsoft's WHQL certification has been pretty lax since Windows 10).
Oddly enough, they only seem to list 64-bit links for Windows 8.1 and Windows 10. I'm not sure whether this issue doesn't affect Windows 7 and 32-bit versions of 8.1 and 10, or if they just didn't want to push the hotfix out to them for some reason.
Subject: Graphics Cards | June 14, 2016 - 01:46 PM | Jeremy Hellstrom
Tagged: GTX1070, nvidia, overclocking
Overclocking the new Pascal GPUs can be accomplished with the EVGA Precision X tool, which lets you easily bump up the power and temperature targets, the fan speed, and the GPU and memory frequencies. [H]ard|OCP set out to push the 1070 as far as it would go with this software in a recent review. The power target can only be increased to 112%, which they applied along with setting the fan to 100%; this was about maximum performance, not peace and quiet. After quite a bit of testing they settled on 2062MHz GPU and 4252MHz RAM clocks as the highest stable frequencies this particular card could manage. The results show a card that leaves the TITAN X in the dirt, and this card does not even have a custom cooler; we anxiously await the non-Founders Edition releases to see what they can accomplish.
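For context, a rough sketch of how much headroom that GPU clock represents (assuming the GTX 1070 Founders Edition's rated boost clock of 1683 MHz; out of the box, GPU Boost typically runs higher than the rated figure, so the real-world gain is smaller):

```python
# Overclock headroom relative to the rated boost clock (1683 MHz is the
# GTX 1070 FE spec; actual GPU Boost clocks run higher, so this overstates
# the practical gain somewhat).
STOCK_BOOST_MHZ = 1683
OC_MHZ = 2062

gain_pct = (OC_MHZ - STOCK_BOOST_MHZ) / STOCK_BOOST_MHZ * 100
print(round(gain_pct, 1))  # 22.5
```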
"In our overclocking review of the NVIDIA GeForce GTX 1070 Founders Edition we will see how far we can overclock the GPU and memory and then compare performance with GeForce GTX TITAN X and GeForce GTX 980 Ti. How high will she go? Can the $449 GTX 1070 outperform a $1000 GTX TITAN X? The answer is exciting."
Here are some more Graphics Card articles from around the web:
- NVIDIA GeForce GTX 1070 On Linux: Testing With OpenGL, OpenCL, CUDA & Vulkan @ Phoronix
- MSI GeForce GTX 1070 Gaming X Review - It's RGB! @ HiTech Legion
- NVIDIA GeForce GTX 1070 8 GB @ techPowerUp
- MSI GTX 1080 & GTX 1070 Gaming X 8G Review @ OCC
- Deep Learning & CUDA Benchmarks On The GeForce GTX 1080 Under Linux @ Phoronix
- Gigabyte GTX 1080 G1 Gaming 8 GB @ techPowerUp
- Gigabyte GeForce GTX 1080 G1 GAMING @ Guru of 3D
- MSI GTX 1080 Gaming X 8 GB @ techPowerUp
Subject: Graphics Cards, Processors | June 13, 2016 - 03:51 PM | Scott Michaud
Tagged: amd, Polaris, Zen, Summit Ridge, rx 480, rx 470, rx 460
AMD has just unveiled their entire RX line of graphics cards at E3 2016's PC Gaming Show. It was a fairly short segment, but it had a few interesting points in it. At the end, they also gave another teaser of Summit Ridge, which uses the Zen architecture.
First, Polaris. As we know, the RX 480 will bring >5 TFLOPs at a $199 price point. They elaborated that this applies to the 4GB version, which likely means that another version with more VRAM will be available, and that implies 8GB. Beyond the RX 480, AMD has also announced the RX 470 and RX 460. Little is known about the 470, but they mentioned that the 460 will have a <75W TDP. This is interesting because the PCIe slot provides up to 75W of power, which implies that the card will not require any external power connector, and thus could be a cheap and powerful (in terms of esports titles) addition to an existing desktop. This is an interesting way to use the power savings of the die shrink to 14nm!
They also showed off a backpack VR rig. They didn't really elaborate, but it's here.
As for Zen? AMD showed the new architecture running DOOM, and added the circle-with-Zen branding to a 3D model of a CPU. Zen will be coming first to the enthusiast category with (up to?) eight cores, two threads per core (16 threads total).
The AMD Radeon RX 480 will launch on June 29th for $199 USD (4GB). None of the other products have a specific release date.
Subject: Graphics Cards | June 8, 2016 - 09:45 PM | Scott Michaud
That wasn't even capslock. That was pure shift key.
A little after Computex, GIGABYTE announced their GTX 1080 XTREME GAMING graphics card, which should be their flagship of the GeForce GTX 1080 line. It is a three-fan design, although the center fan overlaps with the two edge ones. It will also accept two eight-pin PCIe power connectors, which gives a theoretical maximum draw of 375W.
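That 375W ceiling is simply the sum of the PCIe power-delivery limits, as a quick sketch:

```python
# PCIe power budget, per the PCIe CEM spec limits
SLOT_W = 75        # x16 slot
EIGHT_PIN_W = 150  # per 8-pin (6+2) connector

max_draw = SLOT_W + 2 * EIGHT_PIN_W
print(max_draw)  # 375
```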
Also, taking a cue from EVGA's recent VR-Edition 980 Ti, GIGABYTE includes an I/O front panel for cases with an extra 5.25” bay. This contains two HDMI ports and two USB 3.0 ports, allowing users to quickly connect and disconnect VR headsets. When connected, it disables two DisplayPort outputs on the card, routing them to the front panel's HDMI instead. I'm not exactly clear on why you would need two HDMI connections in the front, but okay.
They are also releasing their own SLI HB connector, which they claim will support speeds up to 1080 MHz. The SLI HB standard, from NVIDIA, clocks up to 650 MHz. We assume that this is a typo on GIGABYTE's part (having 1080 on the brain for some reason...) but we've contacted NVIDIA to see what's up.
There is currently no pricing or availability information. The card comes with a three-year warranty that can be upgraded to a four-year warranty by registering your product and signing up for their “XTREME GAMING Club”.
Subject: Graphics Cards | June 8, 2016 - 08:44 PM | Ryan Shrout
Tagged: sli, pascal, nvidia, GTX 1080, GP104, geforce, 4-way sli, 3-way sli
IMPORTANT UPDATE: After writing this story, but before publication, we went to NVIDIA for comment. As we were getting ready to publish, the company updated me with a shift in its stance on multi-GPU configurations. NVIDIA will no longer require an "enthusiast key" to enable SLI on more than two GPUs. However, NVIDIA will also only be enabling 3-Way and 4-Way SLI for a select few applications. More details are at the bottom of the story!
You'll likely recall that during our initial review of the GeForce GTX 1080 Founders Edition graphics card, we mentioned that NVIDIA was going to be moving people towards the idea that "only 2-Way SLI will be supported" and promoted. There would still be a path for users that wanted 3 and 4 GPU configurations anyway, and it would be called the Enthusiast Key.
As it turns out, after returning from an AMD event focused on its upcoming Polaris GPUs, I happen to have amassed a total of four GeForce GTX 1080 cards.
Courtesy of some friends at EVGA and two readers that were awesome enough to let me open up their brand new hardware for a day or so, I was able to go through the 3-Way and 4-Way SLI configuration process. Once all four were installed, and I must point out how great it is that each card only required a single 8-pin power connector, I installed the latest NVIDIA driver I had on hand, 368.19.
Knowing about the need for the Enthusiast Key, and also knowing that I did not yet have one and that the website that was supposed to be live to enable me to get one is still not live, I thought I might have stumbled upon some magic. The driver appeared to let me enable SLI anyway.
Enthusiasts will note however that the green marker under the four GPUs with the "SLI" text is clearly only pointing at two of the GTX 1080s, leaving the remaining two...unused. Crap.
At this point, anyone who has purchased more than two GeForce GTX 1080 cards is simply out of luck, waiting on NVIDIA to make good on its promise to allow 3-Way and 4-Way configurations via the Enthusiast Key. Or some other way. It's way too late now to simply say "we aren't supporting it at all."
While I wait...what is there for a gamer with four GeForce GTX 1080 cards to do? Well, you could run Ashes of the Singularity. Its multi-GPU support uses MDA mode, which means the game engine itself accesses each GPU on its own, without the need for the driver to handle any GPU load balancing. Unfortunately, Ashes only supports two GPUs today.
Well...you could also run an OpenCL-based benchmark like LuxMark, which accesses all of the GPUs independently.
I did so, and the result is an impressive score of 17,127!!
How does that compare to some other products?
The four GTX 1080 cards produce a score that is 2.57x the result provided by the AMD Radeon Pro Duo and 2.29x the score of SLI GeForce GTX 980 Ti cards. Nice!
So there you go! We are just as eager to get our hands on the ability to test 3-Way and 4-Way SLI with new Pascal GPUs as some of the most extreme and dedicated enthusiasts out there are. With any luck, NVIDIA will finally figure out a way to allow it - no matter how it finally takes place.
IMPORTANT UPDATE: Before going to press with this story I asked NVIDIA for comment directly: when was the community finally going to get the Enthusiast Key website to unlock 3-Way and 4-Way SLI for those people crazy enough to have purchased that many GTX 1080s? The answer was quite surprising: NVIDIA is backing away from the idea of an "Enthusiast Key" and will no longer require it for enabling 3-Way and 4-Way SLI.
Here is the official NVIDIA statement given to PC Perspective on the subject:
With the GeForce 10-series we’re investing heavily in 2-way SLI with our new High Bandwidth bridge (which doubles the SLI bandwidth for faster, smoother gaming at ultra-high resolutions and refresh rates) and NVIDIA Game Ready Driver SLI profiles. To ensure the best possible gaming experience on our GeForce 10-series GPUs, we’re focusing our efforts on 2-way SLI only and will continue to include 2-way SLI profiles in our Game Ready Drivers.
DX12 and NVIDIA VR Works SLI technology also allows developers to directly implement and control multi-GPU support within their games. If a developer chooses to use these technologies then their game will not need SLI profiles. Some developers may also decide to support more than 2 GPUs in their games. We continue to work with all developers creating games and VR applications that take advantage of 2 or more GPUs to make sure they’ll work great on GeForce 10-series GPUs.
For our overclocking community, our Game Ready Drivers will also include SLI profiles for 3- and 4-way configurations for specific OC applications only, including Fire Strike, Unigine and Catzilla.
NVIDIA clearly wants to reiterate that only 2-Way SLI will get the attention that we have come to expect from the GeForce driver dev team. As the next-generation DX12 and Vulkan APIs become more prolific, game developers will still have the ability to directly access more than two GeForce GTX 10-series GPUs, though I expect that to be a very narrow window of games simply due to development costs and time.
NVIDIA will enable support for three and four card configurations in future drivers (without a key) for specific overclocking/benchmarking tools only, as a way to make sure the GeForce brand doesn't fall off the 3DMark charts. Only those specific applications will be able to operate in the 3-Way and 4-Way SLI configurations that you have come to know. There are no profiles to change manually, and even the rare games that might have "just worked" with three or four GPUs will not take advantage of more than two GTX 10-series cards. It's fair to say at this point that, except for the benchmarking crowd, NVIDIA 3-Way and 4-Way SLI is over.
We expect the "benchmark only" mode of 3-Way and 4-Way SLI to be ready for consumers with the next "Game Ready" driver release. If you happened to get your hands on more than two GTX 1080s but aren't into benchmarking, then find those receipts and send a couple back.
So there you have it. Honestly, this is what I was expecting from NVIDIA with the initial launch of Pascal and the GeForce GTX 1080/1070, and I was surprised when I first heard about the idea of the "enthusiast key." It took a bit longer than expected, and NVIDIA will get more flak for this drawn-out dismissal of a very niche, but still pretty cool, technology. In the end, this won't have much impact on the company's bottom line, as the number of users buying 3+ GTX GPUs for a single system was understandably small.
Subject: Graphics Cards | June 8, 2016 - 06:26 PM | Scott Michaud
Tagged: nvidia, giveaway, e3 2016, E3
Update, June 8th @ 8:15pm: Just to clarify, this giveaway is not affiliated with PC Perspective. We just found it on Twitter and thought that our readers might like to have a chance at free hardware.
Fairly simple bit of news for this one. NVIDIA has announced that they will be giving away $100,000 of prizes to people who message @NVIDIA and use the #GameReady hashtag, on either Twitter or Instagram, during one of five E3 keynotes.
Sunday (June 12th, 2016):
- EA at 1PM PDT / 4PM EDT / 8PM GMT
- Bethesda at 7PM PDT / 10PM EDT / 2AM GMT (Monday)
Monday (June 13th, 2016):
- Microsoft at 9:30AM PDT / 12:30PM EDT / 4:30PM GMT
- PC Gaming Show at 12PM PDT / 3PM EDT / 7PM GMT
- Ubisoft at 1PM PDT / 4PM EDT / 8PM GMT
Interestingly, Sony was not listed on their rundown. Sure, they rarely have anything relevant to PC gamers, but it's an amusing omission nonetheless.
According to their Terms and Conditions, the sweepstakes is open to a large portion of the world. They will be giving away fifty GTX 1080s, “up to” thirty $500 Steam Gift Cards, and “an ultimate PC battlestation”??? I'm not sure what that is, but it sounds like Mark Hamill will be trying to destroy it a few times.
E3 starts this weekend! Stay tuned for coverage. (You can also sleep, eat, and do laundry, though.)
Subject: Graphics Cards | June 8, 2016 - 02:11 AM | Scott Michaud
Tagged: nvidia, graphics drivers
NVIDIA has released a new graphics driver, in line with EA's new title, Mirror's Edge: Catalyst. Version 368.39 is another of their WHQL-certified, Game Ready-branded drivers that integrates all of their tweaks to improve the game's performance, including an updated SLI profile. It also includes performance tweaks for Insomniac's Oculus-exclusive VR title, Edge of Nowhere, which released on June 6th.
Beyond performance enhancements for specific titles, the driver also includes new features and fixes to known bugs. On the feature side of things, a handful of OpenGL extensions were added to support new features in Pascal. Extensions allow hardware vendors to add features without the Khronos Group needing to officially adopt them in the standard (although many become multi-vendor and eventually end up in a later core specification). In this case, NVIDIA has added Single Pass Stereo and Lens Matched Shading, both of which increase VR performance; Improved Conservative Rasterization, which reduces the chance that a pixel fragment will be missed during rasterization of degenerate or otherwise odd geometry; and Double Precision Atomic Operations, which increase reliability when doing GPU compute on 64-bit, double-precision values in OpenGL.
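Applications discover vendor extensions like these at runtime by enumerating the driver's extension strings. A minimal sketch of that pattern (the extension names are NVIDIA's published strings that I believe correspond to these features; a real program would query them via glGetStringi on a live OpenGL context, which this standalone snippet only simulates):

```python
# Simulated extension check: in a real program, this set would be built from
# glGetStringi(GL_EXTENSIONS, i) calls against a live OpenGL context.
driver_extensions = {
    "GL_NV_stereo_view_rendering",                   # Single Pass Stereo
    "GL_NV_clip_space_w_scaling",                    # Lens Matched Shading
    "GL_NV_conservative_raster_pre_snap_triangles",  # improved conservative raster
    "GL_NV_shader_atomic_float64",                   # double-precision atomics
}

def supports(extension_name):
    """Return True if the driver advertises the given extension."""
    return extension_name in driver_extensions

print(supports("GL_NV_stereo_view_rendering"))  # True
print(supports("GL_NV_imaginary_extension"))    # False
```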
On Windows 10, seven bugs were fixed in 368.39, and two of those were fairly high profile. First, the GTX 1080 Founders Edition fan speed revving issue has been fixed, as NVIDIA mentioned a few days ago. Second, performance issues (stuttering) in Total War: WARHAMMER were fixed. They also fixed an issue where Metal Gear Solid V would fail to launch (white screen).
The new drivers are available on GeForce Experience or their website.