EVGA Shows Off New High Bandwidth "Pro SLI Bridge HB" Bridges

Subject: Graphics Cards | June 22, 2016 - 02:40 AM |
Tagged: SLI HB, nvidia, EVGA SLI HB

Earlier this month we reported that EVGA would be producing its own version of NVIDIA's SLI High Bandwidth bridges (aka SLI HB). Today, the company unveiled the details we did not previously know, particularly pricing and what the connectors look like.

EVGA is calling the new SLI HB bridges the EVGA Pro SLI HB Bridge, and they will be available in several sizes to accommodate your particular card spacing. Note that the 0 slot, 1 slot, 2 slot, and 4 slot spacing bridges are all for two-card setups; you will not be able to use these bridges for Tri SLI or Quad SLI configurations. While NVIDIA did not show the underside of the HB bridges when it first announced them alongside the GTX 1080 graphics card, thanks to EVGA you can finally see what the connectors look like.

EVGA Pro SLI HB Bridge.jpg

As many surmised, the new high bandwidth bridges use both SLI fingers on each card to connect the two cards together. Previously (using the old-style SLI bridges), it was possible to connect card A to card B using one set of connectors and card B to card C using the second set, for example. Now, you are limited to two-card multi-GPU setups. That is the downside; the upside, however, is that the HB bridges promise to deliver all of the bandwidth necessary for high speed 4K and NVIDIA Surround display setups. While you will not necessarily see higher frame rates, the HB bridges should allow for improved frame times, which will mean smoother gameplay on those very high resolution monitors!

sli_rgb_full_animated.gif

The new SLI bridges are all black with an EVGA logo in the middle that is backlit by an LED. Users can use a switch along the bottom edge of the PCB to select from red, green, blue, and white LED colors. In my opinion these bridges look a lot better than the NVIDIA SLI HB bridge renders from our Computex story (hehe).

Now, as for pricing: EVGA is pricing its SLI HB bridges at $39.99, with the 2 slot and 4 slot spacing bridges available now and the 0 slot and 1 slot versions set to be available soon (you can sign up to be notified when they go on sale). Hopefully reviews around the net will be updated shortly with the new bridges to see what impact they really have on multi-GPU gaming performance (or if they are just better looking alternatives to the older LED bridges or ribbon bridges)!

Source: EVGA

Whoops! AMD Radeon RX 480 Specifications on Newegg

Subject: Graphics Cards | June 21, 2016 - 08:49 PM |
Tagged: rx 480, Radeon RX 480, polaris 10, Polaris, amd

The AMD Radeon RX 480 is set to launch on June 29th, but a VisionTek model was published a little early on Newegg (now unpublished -- thanks to our long-time reader, Arbiter, for the heads up). Basically all specifications were already shared, and Ryan wrote about them on June 1st, but the final clock rates were unknown. The VisionTek listing, on the other hand, has the core clock at 1120 MHz (5.16 TFLOPs) with a boost of 1266 MHz (5.83 TFLOPs).

amd-2016-polaris-rx480visionteknewegg.jpg

Granted, it's possible that the VisionTek model could be overclocked, even though neither the box nor the product page marks it as a factory-overclocked SKU. Also, 5.16 TFLOPs and 5.83 TFLOPs align pretty closely with AMD's “>5 TFLOPs” rating, so it's unlikely that the canonical specifications slide in underneath these figures. Keep in mind that TFLOP ratings are a theoretical maximum, so real-world benchmarks need to be considered for a true measure of performance. That said, this would put the stock RX 480 in the range of a GTX 980 (somewhere above it at the listed boost clock, and slightly below its expected TFLOP rating when overclocked).
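
The quoted TFLOP figures fall straight out of the listed clocks. As a sanity check, here is the back-of-envelope math, assuming the widely rumored 2304 stream processors for Polaris 10 (the shader count is not stated on the listing itself):

```python
# Back-of-envelope FLOPS check for the VisionTek RX 480 listing.
# Assumes 2304 stream processors (the rumored Polaris 10 count, not
# confirmed by the Newegg page) and 2 FLOPs per SP per clock (FMA).
def tflops(stream_processors, clock_mhz, flops_per_clock=2):
    return stream_processors * flops_per_clock * clock_mhz / 1e6

base = tflops(2304, 1120)   # 1120 MHz base clock
boost = tflops(2304, 1266)  # 1266 MHz boost clock
print(round(base, 2), round(boost, 2))  # 5.16 5.83
```

The result matches the listing's figures exactly, which is a point in favor of the 2304-shader assumption.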

There is no price listed for the 8GB model, but the 4GB version will be $199 USD.

Fermi, Kepler, Maxwell, and Pascal Comparison Benchmarks

Subject: Graphics Cards | June 21, 2016 - 05:22 PM |
Tagged: nvidia, fermi, kepler, maxwell, pascal, gf100, gf110, GK104, gk110, GM204, gm200, GP104

Techspot published an article that compared eight GPUs across six high-end dies spanning NVIDIA's last four architectures, Fermi through Pascal. Average frame rates were listed across nine games, each measured at three resolutions: 1366x768 (~720p HD), 1920x1080 (1080p FHD), and 2560x1600 (WQXGA, a step above 1440p QHD).

nvidia-2016-dreamhack-1080-stockphoto.png

The results are interesting. Comparing GP104 to GF100, mainstream Pascal is typically on the order of four times faster than big Fermi. Over that time, we've had three full generational leaps in fabrication technology, leading to over twice the number of transistors packed into a die that is almost half the size. The data also shows that prices have remained relatively constant, except that the GTX 1080 is sort-of priced in the x80 Ti category despite a die size that places it in the non-Ti class. (Techspot lists the 1080 at $600, but you can't really find one outside the $650-700 USD range.)

It would be interesting to see this data set compared against AMD. It's informative for an NVIDIA-only article, though.

Source: Techspot

Windows 10 versus Ubuntu 16.04 versus NVIDIA versus AMD

Subject: Graphics Cards | June 20, 2016 - 04:11 PM |
Tagged: windows 10, ubuntu, R9 Fury, nvidia, linux, GTX1070, amd

Phoronix wanted to test out how the new GTX 1070 and the R9 Fury compare on Ubuntu with new drivers and patches, as well as contrasting how they perform on Windows 10.  There are two separate articles, as the focus is not old silicon versus new but rather the performance comparison between the two operating systems.  AMD was tested with the Crimson Edition 16.6.1 driver, the AMDGPU-PRO Beta 2 (16.20.3) driver, as well as Mesa 12.1-dev.  There were interesting differences between the tested games, as some would only support one of the two Linux drivers.  The performance also varies based on the game engine, with some tests coming out in ties, others seeing Windows 10 pull ahead, and even some cases where Linux performance was significantly better.

NVIDIA's GTX 1080 and 1070 were tested using the 368.39 driver release for Windows and the 367.27 driver for Ubuntu.  Again we see mixed results; depending on the game, Linux performance might actually beat out Windows, especially if OpenGL is an option.

Check out both reviews to see what performance you can expect from your GPU when gaming under Linux.

image.php_.jpg

"Yesterday I published some Windows 10 vs. Ubuntu 16.04 Linux gaming benchmarks using the GeForce GTX 1070 and GTX 1080 graphics cards. Those numbers were interesting with the NVIDIA proprietary driver but for benchmarking this weekend are Windows 10 results with Radeon Software compared to Ubuntu 16.04 running the new AMDGPU-PRO hybrid driver as well as the latest Git code for a pure open-source driver stack."


Source: Phoronix

NVIDIA Announces PCIe Versions of Tesla P100

Subject: Graphics Cards | June 20, 2016 - 01:57 PM |
Tagged: tesla, pascal, nvidia, GP100

GP100, the “Big Pascal” chip that was announced at GTC, will be coming to PCIe add-in cards for enterprise and supercomputer customers in Q4 2016. Previously, it was only announced using NVIDIA's proprietary NVLink connection. In fact, NVIDIA also gave itself some lead time with its first-party DGX-1 system, which retails for $129,000 USD, although we expect that was more for yield reasons. Josh calculated that each GPU in that system is priced at more than the full wafer its die was manufactured on.

nvidia-2016-gp100tesla.jpg

This brings us to the PCIe versions. Interestingly, they have been down-binned from the NVLink version. The boost clock has been dropped to 1300 MHz, from 1480 MHz, although that is matched with a slightly lower TDP (250W versus the NVLink's 300W). This lowers the FP16 performance to 18.7 TFLOPs, down from 21.2, FP32 performance to 9.3 TFLOPs, down from 10.6, and FP64 performance to 4.7 TFLOPs, down from 5.3. This is where we get to the question: did NVIDIA reduce the clocks to hit a 250W TDP and be compatible with the passive cooling technology that previous Tesla cards utilize, or were the clocks dropped to increase yield?
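
Those FLOP figures scale linearly with the boost clock. A quick sketch, assuming GP100's 3584 FP32 CUDA cores with FP16 at twice and FP64 at half the FP32 rate (per the GP100 architecture):

```python
# Rough scaling check for the PCIe Tesla P100 down-bin.
# Assumes GP100's 3584 FP32 CUDA cores and 2 FLOPs/core/clock (FMA);
# on GP100, FP16 runs at 2x and FP64 at 1/2 the FP32 rate.
CORES = 3584

def fp32_tflops(clock_mhz):
    return CORES * 2 * clock_mhz / 1e6

for clock in (1480, 1300):  # NVLink vs. PCIe boost clocks
    fp32 = fp32_tflops(clock)
    # clock, FP16, FP32, FP64
    print(clock, round(2 * fp32, 1), round(fp32, 1), round(fp32 / 2, 1))
```

This reproduces 21.2/10.6/5.3 TFLOPs at 1480 MHz and 9.3/4.7 TFLOPs for FP32/FP64 at 1300 MHz; the FP16 figure lands at 18.6 rather than the quoted 18.7, which suggests the PCIe boost clock is actually a hair above an even 1300 MHz.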

They are also providing a 12GB version of the PCIe Tesla P100. I didn't realize that GPU vendors could selectively disable HBM2 stacks, but NVIDIA disabled 4GB of memory, which also drops the bus width to 3072-bit. You would think that the simplicity of the circuit would favor dividing work in a power-of-two fashion, but, knowing now that they can deviate from that, it makes me wonder why they did. Again, my first reaction is to question GP100 yield, but you wouldn't think that the HBM interface, being such a small part of the die, would let them reclaim many chips by disabling a chunk, right? That is, unless the HBM2 stacks themselves have yield issues -- which would be interesting.
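
The stack math behind that 12GB figure is simple, assuming the full card's layout is four 4GB HBM2 stacks, each on its own 1024-bit interface (a reasonable reading of the 16GB/4096-bit full configuration):

```python
# HBM2 stack arithmetic for the 12GB PCIe Tesla P100.
# Assumed layout: four 4GB stacks, each with a 1024-bit interface.
STACKS, GB_PER_STACK, BITS_PER_STACK = 4, 4, 1024

active = STACKS - 1  # NVIDIA disables one stack
print(active * GB_PER_STACK, "GB,", active * BITS_PER_STACK, "bit")
```

Disabling one of the four stacks takes both the capacity (16GB to 12GB) and the bus width (4096-bit to 3072-bit) down by exactly one quarter, consistent with the announced specs.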

There is also still no word on a 32GB version. Samsung claimed the memory technology, 8GB stacks of HBM2, would be ready for products in Q4 2016 or early 2017. We'll need to wait and see where, when, and why it will appear.

Source: NVIDIA

NVIDIA Releases 368.51 Hotfix Driver

Subject: Graphics Cards | June 18, 2016 - 10:37 PM |
Tagged: nvidia, graphics drivers

GeForce Hotfix 368.51 drivers have been released by NVIDIA through their support website. This version only officially addresses flickering at high refresh rates, although its version number has been incremented quite a bit since the last official release (368.39), so it's possible that it rolls in other changes, too. That said, I haven't heard of too many specific issues with 368.39, so I'm not quite sure what those would be.

nvidia-2015-bandaid.png

As always with a hotfix driver, NVIDIA pushed it out with minimal testing. It should pretty much only be installed if you have a specific issue (particularly the listed one) and you don't want to wait for a release that both NVIDIA and Microsoft have looked over (although Microsoft's WHQL certification has been pretty lax since Windows 10).

Oddly enough, they only seem to list 64-bit links for Windows 8.1 and Windows 10. I'm not sure whether this issue doesn't affect Windows 7 and 32-bit versions of 8.1 and 10, or if they just didn't want to push the hotfix out to them for some reason.

Source: NVIDIA

How far can a GTX 1070 Founders Edition go?

Subject: Graphics Cards | June 14, 2016 - 01:46 PM |
Tagged: GTX1070, nvidia, overclocking

Overclocking the new Pascal GPUs can be accomplished with the EVGA Precision X tool, as it allows you to bump up the power target, temperature target, and fan speed, as well as the GPU and memory frequencies, easily and effectively.  [H]ard|OCP set out to push the GTX 1070 as far as it would go with this software in a recent review.  The power target can only be increased to 112%, which they applied along with setting the fan to 100%, as this is about maximum performance, not peace and quiet.  After quite a bit of testing they settled on a 2062 MHz GPU clock and a 4252 MHz memory clock as the highest stable frequencies this particular card could manage.  The results show a card that leaves the TITAN X in the dirt, and this card does not even have a custom cooler; we anxiously await the non-Founders Edition releases to see what they can accomplish.

1465810411UrsXB3Z0D6_1_1.gif

"In our overclocking review of the NVIDIA GeForce GTX 1070 Founders Edition we will see how far we can overclock the GPU and memory and then compare performance with GeForce GTX TITAN X and GeForce GTX 980 Ti. How high will she go? Can the $449 GTX 1070 outperform a $1000 GTX TITAN X? The answer is exciting."


Source: [H]ard|OCP

AMD "Sneak Peek" at RX Series (RX 480, RX 470, RX 460)

Subject: Graphics Cards, Processors | June 13, 2016 - 03:51 PM |
Tagged: amd, Polaris, Zen, Summit Ridge, rx 480, rx 470, rx 460

AMD has just unveiled their entire RX line of graphics cards at E3 2016's PC Gaming Show. It was a fairly short segment, but it had a few interesting points in it. At the end, they also gave another teaser of Summit Ridge, which uses the Zen architecture.

amd-2016-e3-470460.png

First, Polaris. As we knew, the RX 480 will bring >5 TFLOPs at a $199 price point. AMD elaborated that this price applies to the 4GB version, which likely means that another version with more VRAM will be available, and that implies 8GB. Beyond the RX 480, AMD also announced the RX 470 and RX 460. Little is known about the 470, but they mentioned that the 460 will have a <75W TDP. This is interesting because a PCIe x16 slot provides up to 75W of power, which implies that the card will not require any external power connector, and thus could be a cheap and powerful (in terms of esports titles) addition to an existing desktop. This is an interesting way to use the power savings of the die shrink to 14nm!

amd-2016-e3-backpackpc.png

They also showed off a backpack VR rig. They didn't really elaborate, but it's here.

amd-2016-e3-summitdoom.png

As for Zen? AMD showed the new architecture running DOOM and added the circle-with-Zen branding to a 3D model of a CPU. Zen will come first to the enthusiast category with (up to?) eight cores and two threads per core (16 threads total).

amd-2016-e3-zenlogo.png

The AMD Radeon RX 480 will launch on June 29th for $199 USD (4GB). None of the other products have a specific release date.

Source: AMD

GIGABYTE ANNOUNCES GTX 1080 XTREME GAMING

Subject: Graphics Cards | June 8, 2016 - 09:45 PM |
Tagged:

That wasn't even capslock. That was pure shift key.

A little after Computex, GIGABYTE announced its GTX 1080 XTREME GAMING graphics card, which should be the flagship of its GeForce GTX 1080 line. It is a three-fan design, although the center fan overlaps the two edge fans. It will also accept two eight-pin PCIe power connectors, which gives a theoretical maximum draw of 375W.
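
That 375W figure comes straight from the PCIe spec's per-connector limits, which is worth spelling out:

```python
# Theoretical board power budget for the XTREME GAMING's connector
# loadout, per the PCIe spec limits: 75W from the x16 slot and
# 150W per eight-pin auxiliary connector.
SLOT_W, EIGHT_PIN_W = 75, 150
print(SLOT_W + 2 * EIGHT_PIN_W, "W")  # 375 W
```

For comparison, the Founders Edition GTX 1080's single eight-pin connector caps out at a theoretical 225W on the same math, so GIGABYTE is leaving a lot of headroom for overclocking.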

gigabyte-2016-GTX1080XG_03.png

Also, taking a cue from EVGA's recent VR Edition 980 Ti, GIGABYTE includes an I/O front panel for cases with a spare 5.25” bay. This panel contains two HDMI ports and two USB 3.0 ports to allow users to quickly connect and disconnect VR headsets. When connected, it disables two DisplayPort outputs on the card, routing them to the front panel's HDMI ports instead. I'm not exactly clear on why you would need two HDMI connections in the front, but okay.

gigabyte-slihb-claims.jpg

They are also releasing their own SLI HB connector, which they claim will support speeds up to 1080 MHz. The SLI HB standard, from NVIDIA, clocks up to 650 MHz. We assume that this is a typo on GIGABYTE's part (having 1080 on the brain for some reason...) but we've contacted NVIDIA to see what's up.

There is currently no pricing or availability information. The card comes with a three-year warranty that can be upgraded to a four-year warranty by registering the product and signing up for GIGABYTE's “XTREME GAMING Club”.

Source: GIGABYTE

GeForce GTX 1080 and 1070 3-Way and 4-Way SLI will not be enabled for games

Subject: Graphics Cards | June 8, 2016 - 08:44 PM |
Tagged: sli, pascal, nvidia, GTX 1080, GP104, geforce, 4-way sli, 3-way sli

IMPORTANT UPDATE: After writing this story, but before publication, we went to NVIDIA for comment. As we were getting ready to publish, the company updated me with a shift in its stance on multi-GPU configurations. NVIDIA will no longer require an "enthusiast key" to enable SLI on more than two GPUs. However, NVIDIA will also only be enabling 3-Way and 4-Way SLI for a select few applications. More details are at the bottom of the story!

You'll likely recall that during our initial review of the GeForce GTX 1080 Founders Edition graphics card, we mentioned that NVIDIA was going to be moving people towards the idea that "only 2-Way SLI will be supported" and promoted. There would still be a path for users that wanted 3 and 4 GPU configurations anyway, and it would be called the Enthusiast Key.

As it turns out, after returning from an AMD event focused on its upcoming Polaris GPUs, I happen to have amassed a total of four GeForce GTX 1080 cards.

01.jpg

Courtesy of some friends at EVGA and two readers that were awesome enough to let me open up their brand new hardware for a day or so, I was able to go through the 3-Way and 4-Way SLI configuration process. Once all four were installed, and I must point out how great it is that each card only required a single 8-pin power connector, I installed the latest NVIDIA driver I had on hand, 368.19.

driver2.jpg

Knowing about the need for the Enthusiast Key, and also knowing that I did not yet have one and that the website that was supposed to be live to enable me to get one is still not live, I thought I might have stumbled upon some magic. The driver appeared to let me enable SLI anyway. 

driver1.jpg

Enthusiasts will note however that the green marker under the four GPUs with the "SLI" text is clearly only pointing at two of the GTX 1080s, leaving the remaining two...unused. Crap.

At this point, if you have purchased more than two GeForce GTX 1080 cards, you are simply out of luck and are waiting on NVIDIA to make good on its promise to allow for 3-Way and 4-Way configurations via the Enthusiast Key. Or some other way. It's way too late now to simply say "we aren't supporting it at all."

03.jpg

While I wait...what is there for a gamer with four GeForce GTX 1080 cards to do? Well, you could run Ashes of the Singularity. Its multi-GPU mode uses MDA mode, which means the game engine itself accesses each GPU on its own, without the need for the driver to handle any GPU load balancing. Unfortunately, Ashes only supports two GPUs today.

Well...you could run an OpenCL-based benchmark like LuxMark, which accesses all of the GPUs independently as well.

lux2.jpg

I did so, and the result is an impressive score of 17,127!

lux.jpg

How does that compare to some other products?

luxmarkgraph.jpg

The four GTX 1080 cards produce a score that is 2.57x the result provided by the AMD Radeon Pro Duo and 2.29x the score of SLI GeForce GTX 980 Ti cards. Nice!
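
For reference, those multipliers imply rough absolute scores for the comparison setups (an inference from the quoted ratios, not separately measured figures):

```python
# Implied LuxMark scores of the comparison configurations, derived
# by dividing the quad GTX 1080 result by the quoted ratios.
quad_1080 = 17127
pro_duo = round(quad_1080 / 2.57)   # AMD Radeon Pro Duo
sli_980ti = round(quad_1080 / 2.29) # GTX 980 Ti SLI
print(pro_duo, sli_980ti)  # 6664 7479
```

So the two-GPU setups land in the high-6000s to mid-7000s, roughly in line with the quad 1080s scaling a bit better than 2x over a pair of last-generation flagships.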

02.jpg

So there you go! We are just as eager to get our hands on the ability to test 3-Way and 4-Way SLI with new Pascal GPUs as some of the most extreme and dedicated enthusiasts out there are. With any luck, NVIDIA will finally figure out a way to allow it - no matter how it finally takes place.

IMPORTANT UPDATE: Before going to press with this story I asked NVIDIA for comment directly: when was the community finally going to get the Enthusiast Key website to unlock 3-Way and 4-Way SLI for those people crazy enough to have purchased that many GTX 1080s? The answer was quite surprising: NVIDIA is backing away from the idea of an "Enthusiast Key" and will no longer require it for enabling 3-Way and 4-Way SLI. 

Here is the official NVIDIA statement given to PC Perspective on the subject:

With the GeForce 10-series we’re investing heavily in 2-way SLI with our new High Bandwidth bridge (which doubles the SLI bandwidth for faster, smoother gaming at ultra-high resolutions and refresh rates) and NVIDIA Game Ready Driver SLI profiles.  To ensure the best possible gaming experience on our GeForce 10-series GPUs, we’re focusing our efforts on 2-way SLI only and will continue to include 2-way SLI profiles in our Game Ready Drivers.
 
DX12 and NVIDIA VR Works SLI technology also allows developers to directly implement and control multi-GPU support within their games.  If a developer chooses to use these technologies then their game will not need SLI profiles.  Some developers may also decide to support more than 2 GPUs in their games. We continue to work with all developers creating games and VR applications that take advantage of 2 or more GPUs to make sure they’ll work great on GeForce 10-series GPUs.
 
For our overclocking community, our Game Ready Drivers will also include SLI profiles for 3- and 4-way configurations for specific OC applications only, including Fire Strike, Unigine and Catzilla.

NVIDIA clearly wants to reiterate that only 2-Way SLI will get the attention that we have come to expect from the GeForce driver dev team. As the next-generation DX12 and Vulkan APIs become more prolific, game developers will still have the ability to directly access more than two GeForce GTX 10-series GPUs, though I expect that to be a very narrow window of games, simply due to development costs and time.

NVIDIA will enable support for three and four card configurations in future drivers (without a key) for specific overclocking/benchmarking tools only, as a way to make sure the GeForce brand doesn't fall off the 3DMark charts. Only those specific applications will be able to operate in the 3-Way and 4-Way SLI configurations that you have come to know. There are no profiles to change manually, and even the rare games that might have "just worked" with three or four GPUs will not take advantage of more than two GTX 10-series cards. It's fair to say at this point that, except for the benchmarking crowd, NVIDIA 3-Way and 4-Way SLI is over.

We expect the "benchmark only" mode of 3-Way and 4-Way SLI to be ready for consumers with the next "Game Ready" driver release. If you happened to get your hands on more than two GTX 1080s but aren't into benchmarking, then find those receipts and send a couple back.

So there you have it. Honestly, this is what I was expecting from NVIDIA with the initial launch of Pascal and the GeForce GTX 1080/1070, and I was surprised when I first heard about the idea of the "enthusiast key." It took a bit longer than expected, and NVIDIA will take more flak for the drawn-out dismissal of this very niche, but still pretty cool, technology. In the end, this won't have much impact on the company's bottom line, as the number of users buying three or more GTX GPUs for a single system was understandably small.