NVIDIA Releases 368.69 WHQL Drivers for Dirt Rally VR

Subject: Graphics Cards | July 6, 2016 - 05:10 PM |
Tagged: VR, Oculus, nvidia, graphics drivers, DiRT Rally

A Game Ready Driver has just launched for DiRT Rally VR. GeForce Driver 368.69 WHQL builds upon the last release, adding optimizations for DiRT Rally VR as its headline feature, but it also includes a few new SLI profiles (Armored Warfare, Dangerous Golf, iRacing: Motorsport Simulator, Lost Ark, and Tiger Knight) and, presumably, assorted bug fixes.

nvidia-geforce.png

The game's VR update doesn't yet have a firm release date, but it should arrive soon. According to NVIDIA's blog post, it sounds like it will come to the Oculus Store first and arrive on Steam later this month. I haven't been following the game too closely, but I can't find any announcement of official HTC Vive support.

You can pick up the drivers at NVIDIA's website or through GeForce Experience. Thankfully, the GeForce Experience 3 Beta seems to detect new drivers much more quickly than the previous version.

Source: NVIDIA

Vive, DisplayPort, and GP104 Apparently Don't Mix For Now

Subject: Graphics Cards | July 6, 2016 - 07:15 AM |
Tagged: pascal, nvidia, htc vive, GTX 1080, gtx 1070, GP104

NVIDIA is working on a fix to allow the HTC Vive to be connected to the GeForce GTX 1070 and GTX 1080 over DisplayPort. The HTC Vive's link box offers a choice between HDMI and Mini DisplayPort, but the headset is not detected when connected to these cards over DisplayPort. Currently, the two workarounds are to connect the HTC Vive over HDMI, or to use a DisplayPort to HDMI adapter if your card's HDMI output is already occupied.

nvidia-2016-dreamhack-1080-stockphoto.png

It has apparently been an open issue for over a month now, though NVIDIA's Manuel Guzman has acknowledged it. Other threads claim that some monitors exhibit a similar issue, and, within the last 24 hours, a few users have had luck modifying their motherboard's settings. I'd expect it's something they can fix in an upcoming driver, though. For now, plan your monitor outputs accordingly if you were planning on getting the HTC Vive.

Source: NVIDIA

NVIDIA Announces GeForce Experience 3.0 Beta

Subject: Graphics Cards | July 2, 2016 - 01:25 AM |
Tagged: nvidia, geforce, geforce experience

GeForce Experience will be getting an updated UI soon, and a beta release is available now. The application has basically been fully redesigned, although the NVIDIA Control Panel remains the same as it has been. To be fair, even though GeForce Experience is the newer of the two, it could benefit from a good overhaul, especially in terms of start-up delay. NVIDIA says the new version uses half the memory and loads three times faster. It still shows a loading bar, but for less than a second.

nvidia-2016-gfe3-01.png

Interestingly, I noticed that, even though I skipped over Sharing Settings on first launch, Instant Replay was set to On by default. This could have been carried over from my previous instance of GeForce Experience, although I'm pretty sure I left it off. Privacy-conscious folks might want to verify that ShadowPlay isn't running, just in case.

nvidia-2016-gfe3-02.png

One downside for some of our users is that you now need an NVIDIA account (or a Google account connected to NVIDIA) to access it. Previously, you could use features like ShadowPlay while logged out, but that no longer appears to be the case. This will no doubt upset some of our audience, but it's not entirely unexpected given NVIDIA's previous statements about requiring an NVIDIA account for Beta drivers; extending the requirement to the rest of GeForce Experience isn't too surprising in that light.

nvidia-2016-gfe3-03.png

We'll now end where we began: installation. For testing (and hopefully providing feedback) during the beta, NVIDIA will be giving away GTX 1080s on a weekly basis. To enter, you apparently just need to install the Beta and log in with your NVIDIA (or Google) account.

Source: NVIDIA

AMD RX 480 (and NVIDIA GTX 1080) Launch Demand

Subject: Graphics Cards | June 30, 2016 - 07:54 PM |
Tagged: amd, nvidia, FinFET, Polaris, polaris 10, pascal

If you're trying to purchase a Pascal or Polaris-based GPU, then you are probably well aware that patience is a required virtue. The problem is that, as a hardware website, we don't really know whether the issue is high demand or low supply. Both chips are manufactured on a new process node, which could mean that yield is a problem. On the other hand, it's been about four years since the last fabrication node, which means chips have gotten much smaller for the same performance, giving upgraders plenty of reason for pent-up demand.

amd-2016-rx480-candid.jpg

Over time, the manufacturing processes will mature and yield will increase. But what about right now? AMD made a very small chip that delivers roughly GTX 970-level performance. NVIDIA stuck with its typical 3XX mm² class of chip, which ended up delivering higher-than-Titan X levels of performance.

It turns out that, according to online retailer Overclockers UK (via Fudzilla), both the RX 480 and GTX 1080 have sold over a thousand units at that location alone. That's quite a bit, especially given that the figure covers just one (large) online retailer in Europe. It's difficult to say how much stock other stores (and regions) received by comparison, but it's still a thousand units in a day.

It's sounding like, for both vendors, pent-up demand might be the dominant factor.

Source: Fudzilla

Report: Image of Reference Design NVIDIA GTX 1060 Leaked

Subject: Graphics Cards | June 28, 2016 - 10:26 AM |
Tagged: nvidia, GeForce GTX 1060, GTX1060, rumor, report, leak, pascal, graphics card, video card

A report from VideoCardz.com shows what appears to be an NVIDIA GeForce GTX 1060 graphics card with a cooler similar to the "Founders Edition" GTX 1080/1070 design.

NVIDIA-GeForce-GTX-1060.jpg

Is this the GTX 1060 reference design? (Image via VideoCardz.com)

The image comes via Reddit (original source links in the VideoCardz post), and we cannot verify its validity, though it certainly looks convincing to this writer.

So what does VideoCardz offer as to the specifications of this GTX 1060 card? Quoting from the post:

"NVIDIA GeForce GTX 1060 will most likely use GP106 GPU with at least 1280 CUDA cores. Earlier rumors suggested that GTX 1060 might get 6 GB GDDR5 memory and 192-bit memory bus."

We await official word from NVIDIA on the GTX 1060, which VideoCardz surmises "is expected to hit the market shortly after Radeon RX 480".

Source: VideoCardz

Frame Time Monday; this time with the GTX 1080

Subject: Graphics Cards | June 27, 2016 - 04:55 PM |
Tagged: pascal, nvidia, GTX 1080, gtx, GP104, geforce, founders edition

You have already seen our delve into the frame times provided by the GTX 1080, but perhaps you would like another opinion.  The Tech Report also uses the FCAT process that we depend upon to bring you frame time data; however, they present the results in a slightly different way, which might help you comprehend them.  They also included Crysis 3, to ensure that the card can indeed play it.  Check out their full review here.
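
For readers unfamiliar with how frame-time captures are usually summarized, here is a minimal sketch of two common metrics used in this style of analysis; the 99th percentile cutoff and 16.7 ms threshold below are illustrative assumptions, not The Tech Report's exact methodology:

```python
# Sketch: two common ways to summarize captured frame times (in milliseconds).
# The 99th percentile and 16.7 ms (60 FPS pacing) threshold are illustrative
# choices, not any particular site's exact methodology.

def percentile_frame_time(frame_times_ms, pct=99.0):
    """Frame time that pct percent of frames complete within (lower is better)."""
    ordered = sorted(frame_times_ms)
    index = min(int(len(ordered) * pct / 100.0), len(ordered) - 1)
    return ordered[index]

def time_spent_beyond(frame_times_ms, threshold_ms=16.7):
    """Total milliseconds spent past the threshold; a budget of 'badness'."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

capture = [14.2, 15.1, 16.0, 33.4, 15.5, 14.9, 50.2, 15.0]  # made-up sample
print(percentile_frame_time(capture))  # near-worst-case frame time
print(time_spent_beyond(capture))      # ms spent beyond 60 FPS pacing
```

The point of these metrics is that average FPS hides stutter: a capture can average 60 FPS while individual frames take 50 ms, and percentile or time-beyond-threshold numbers surface exactly that.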

chip.jpg

"Nvidia's GeForce GTX 1080 is the company's first consumer graphics card to feature its new Pascal architecture, fabricated on a next-generation 16-nm process. We dig deep into the GTX 1080 to see what the confluence of these advances means for the high-end graphics market."

EVGA Shows Off New High Bandwidth "Pro SLI Bridge HB" Bridges

Subject: Graphics Cards | June 22, 2016 - 02:40 AM |
Tagged: SLI HB, nvidia, EVGA SLI HB

Earlier this month we reported that EVGA would be producing its own version of NVIDIA's SLI High Bandwidth bridges (aka SLI HB). Today, the company unveiled the details we did not know previously, particularly pricing and what the connectors look like.

EVGA is calling the new bridge the EVGA Pro SLI HB Bridge, and it will be available in several sizes to accommodate your particular card spacing. Note that the 0-slot, 1-slot, 2-slot, and 4-slot spacing bridges are all for two-card setups; you will not be able to use these bridges for Tri-SLI or Quad-SLI. While NVIDIA did not show the underside of the HB bridges when it first announced them alongside the GTX 1080 graphics card, thanks to EVGA you can finally see what the connectors look like.

EVGA Pro SLI HB Bridge.jpg

As many surmised, the new high-bandwidth bridges use both SLI fingers on each card to connect the two cards together. Previously, using the old-style SLI bridges, it was possible to connect card A to card B with one set of connectors and card B to card C with the second set, for example. Now, you are limited to two-card multi-GPU setups. That is the downside; the upside is that the HB bridges promise to deliver all of the bandwidth necessary for high-speed 4K and NVIDIA Surround display setups. While you will not necessarily see higher frame rates, the HB bridges should allow for improved frame times, which means smoother gameplay on those very high resolution monitors!
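
As a rough illustration of where the extra bandwidth comes from, here is a back-of-the-envelope sketch; the 400 MHz and 650 MHz interface clocks below are commonly reported figures for legacy and HB bridges, not official NVIDIA specifications, so treat them as assumptions:

```python
# Back-of-the-envelope SLI bridge bandwidth comparison.
# ASSUMPTION: legacy single-link bridges are commonly reported to run at
# 400 MHz, while SLI HB runs two links at 650 MHz; neither figure is an
# official NVIDIA specification.
LEGACY_LINKS, LEGACY_CLOCK_MHZ = 1, 400
HB_LINKS, HB_CLOCK_MHZ = 2, 650

relative = (HB_LINKS * HB_CLOCK_MHZ) / (LEGACY_LINKS * LEGACY_CLOCK_MHZ)
print(f"SLI HB carries roughly {relative:.2f}x the data of a legacy bridge")
# -> roughly 3.25x, which is where the 4K/Surround headroom claim comes from
```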

sli_rgb_full_animated.gif

The new SLI bridges are all black with an EVGA logo in the middle that is backlit by an LED. A switch along the bottom edge of the PCB selects from red, green, blue, and white LED colors. In my opinion, these bridges look a lot better than the NVIDIA SLI HB bridge renders from our Computex story (hehe).

Now, as for pricing: EVGA is pricing its SLI HB bridges at $39.99, with the 2-slot and 4-slot spacing bridges available now and the 0-slot and 1-slot bridges set to follow soon (you can sign up to be notified when they go on sale). Hopefully reviews featuring the new bridges will appear around the net shortly, so we can see what impact they really have on multi-GPU gaming performance (or whether they are just better-looking alternatives to the older LED and ribbon bridges)!

Source: EVGA

Fermi, Kepler, Maxwell, and Pascal Comparison Benchmarks

Subject: Graphics Cards | June 21, 2016 - 05:22 PM |
Tagged: nvidia, fermi, kepler, maxwell, pascal, gf100, gf110, GK104, gk110, GM204, gm200, GP104

Techspot published an article that compared eight GPUs across six high-end dies spanning NVIDIA's last four architectures, Fermi through Pascal. Average frame rates were listed for nine games, each measured at three resolutions: 1366x768 (roughly 720p HD), 1920x1080 (1080p FHD), and 2560x1600 (just above 1440p QHD).

nvidia-2016-dreamhack-1080-stockphoto.png

The results are interesting. Comparing GP104 to GF100, mainstream Pascal is typically on the order of four times faster than big Fermi. Over that span, fabrication technology jumped from 40 nm to 16 nm, packing over twice the number of transistors into a die roughly 60 percent of the size. The data also shows that prices have remained relatively constant, except that the GTX 1080 is sort of priced in the x80 Ti category despite its die size placing it in the non-Ti class. (They list the 1080 at $600, but you can't really find one outside the $650 to $700 USD range.)
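
To put numbers behind the transistor claim, here is a quick sanity check using commonly cited die statistics for the two chips; the figures are approximate and drawn from public spec listings, not from the Techspot article itself:

```python
# Approximate, commonly cited die statistics (public spec listings).
GF100 = {"transistors_b": 3.0, "area_mm2": 529}  # big Fermi, 40 nm
GP104 = {"transistors_b": 7.2, "area_mm2": 314}  # mainstream Pascal, 16 nm

print(GP104["transistors_b"] / GF100["transistors_b"])  # ~2.4x transistors
print(GP104["area_mm2"] / GF100["area_mm2"])            # ~0.59x the area
density_gain = (GP104["transistors_b"] / GP104["area_mm2"]) / \
               (GF100["transistors_b"] / GF100["area_mm2"])
print(density_gain)  # ~4x transistor density, roughly tracking the speed-up
```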

It would be interesting to see this data set compared against AMD. It's informative for an NVIDIA-only article, though.

Source: Techspot

Windows 10 versus Ubuntu 16.04 versus NVIDIA versus AMD

Subject: Graphics Cards | June 20, 2016 - 04:11 PM |
Tagged: windows 10, ubuntu, R9 Fury, nvidia, linux, GTX1070, amd

Phoronix wanted to test how the new GTX 1070 and the R9 Fury compare on Ubuntu with new drivers and patches, as well as how they perform on Windows 10.  There are two separate articles, as the focus is not old silicon versus new but the performance comparison between the two operating systems.  AMD was tested with the Crimson Edition 16.6.1 driver on Windows, and with the AMDGPU-PRO Beta 2 (16.20.3) driver as well as Mesa 12.1-dev on Linux.  There were interesting differences between the tested games, as some would only run on one of the two Linux drivers.  Performance also varies with the game engine: some results come out as ties, Windows 10 pulls ahead in others, and in some cases performance on Linux is significantly better.

NVIDIA's GTX 1080 and 1070 were tested using the 368.39 driver release for Windows and the 367.27 driver for Ubuntu.  Again we see mixed results; depending on the game, Linux performance might actually beat Windows, especially if OpenGL is an option.

Check out both reviews to see what performance you can expect from your GPU when gaming under Linux.

image.php_.jpg

"Yesterday I published some Windows 10 vs. Ubuntu 16.04 Linux gaming benchmarks using the GeForce GTX 1070 and GTX 1080 graphics cards. Those numbers were interesting with the NVIDIA proprietary driver but for benchmarking this weekend are Windows 10 results with Radeon Software compared to Ubuntu 16.04 running the new AMDGPU-PRO hybrid driver as well as the latest Git code for a pure open-source driver stack."

Source: Phoronix

NVIDIA Announces PCIe Versions of Tesla P100

Subject: Graphics Cards | June 20, 2016 - 01:57 PM |
Tagged: tesla, pascal, nvidia, GP100

GP100, the “Big Pascal” chip announced at GTC, will be coming to PCIe for enterprise and supercomputer customers in Q4 2016. Previously, it had only been announced with NVIDIA's proprietary NVLink connection. In fact, NVIDIA also gave itself some lead time with its first-party DGX-1 system, which retails for $129,000 USD, although we expect that was more for yield reasons. Josh calculated that each GPU in that system is worth more than the full wafer its die was manufactured on.
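
A rough version of that arithmetic is below; the DGX-1 price and its eight-GPU count are from NVIDIA's announcement, but 16 nm wafer pricing is not public, so the wafer cost is a loudly labeled assumption:

```python
# Rough per-GPU value inside a DGX-1 versus an assumed wafer cost.
# ASSUMPTION: TSMC does not publish 16 nm wafer pricing; $6,000 is an
# unofficial, commonly floated ballpark used purely for illustration.
DGX1_PRICE_USD = 129_000   # from NVIDIA's announcement
GPUS_PER_DGX1 = 8          # the DGX-1 ships with eight Tesla P100s
ASSUMED_WAFER_COST_USD = 6_000

per_gpu = DGX1_PRICE_USD / GPUS_PER_DGX1
print(f"~${per_gpu:,.0f} per GPU vs ~${ASSUMED_WAFER_COST_USD:,} per wafer")
# -> ~$16,125 per GPU, comfortably above the assumed cost of a whole wafer
```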

nvidia-2016-gp100tesla.jpg

This brings us to the PCIe versions. Interestingly, they have been down-binned from the NVLink version. The boost clock has been dropped to 1300 MHz, from 1480 MHz, although that is matched with a slightly lower TDP (250W versus the NVLink's 300W). This lowers the FP16 performance to 18.7 TFLOPs, down from 21.2, FP32 performance to 9.3 TFLOPs, down from 10.6, and FP64 performance to 4.7 TFLOPs, down from 5.3. This is where we get to the question: did NVIDIA reduce the clocks to hit a 250W TDP and be compatible with the passive cooling technology that previous Tesla cards utilize, or were the clocks dropped to increase yield?
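
Those throughput figures fall straight out of the clock change. GP100 has 3584 FP32 CUDA cores, each retiring one fused multiply-add (two floating-point operations) per clock, with FP64 at half rate and FP16 at double rate, so a quick check:

```python
# Peak TFLOPS = 2 ops per FMA x cores x clock (GHz) / 1000.
# GP100: 3584 FP32 cores; FP64 runs at half rate, FP16 at double rate.
FP32_CORES = 3584

def peak_tflops(cores, clock_ghz):
    return 2 * cores * clock_ghz / 1000.0

for label, clock_ghz in [("NVLink", 1.480), ("PCIe", 1.300)]:
    fp32 = peak_tflops(FP32_CORES, clock_ghz)
    print(f"{label}: FP16 {2 * fp32:.1f} / FP32 {fp32:.1f} "
          f"/ FP64 {fp32 / 2:.1f} TFLOPS")
# NVLink: FP16 21.2 / FP32 10.6 / FP64 5.3
# PCIe:   FP16 18.6 / FP32  9.3 / FP64 4.7
# (NVIDIA quotes 18.7 FP16 for PCIe; the official boost clock is 1303 MHz,
# which nudges the rounded FP16 figure up slightly.)
```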

They are also providing a 12GB version of the PCIe Tesla P100. I didn't realize that GPU vendors could selectively disable HBM2 stacks, but NVIDIA disabled one, removing 4GB of memory and dropping the bus width to 3072-bit. You would think that, for simplicity, the memory system would want to divide work in a power-of-two fashion, but, knowing now that they can deviate from that, it makes me wonder why they did. Again, my first reaction is to question GP100 yield, but the HBM interface is such a small part of the die that disabling a chunk of it shouldn't reclaim many chips, right? That is, unless the HBM2 stacks themselves have yield issues, which would be interesting.
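
The memory math works the same way: each HBM2 stack on GP100 contributes a 1024-bit interface, so capacity and bus width scale together with the number of active stacks:

```python
# Each HBM2 stack on GP100 provides a 1024-bit interface and 4 GB.
BITS_PER_STACK = 1024
GB_PER_STACK = 4

for stacks in (4, 3):
    print(f"{stacks} stacks: {stacks * GB_PER_STACK} GB, "
          f"{stacks * BITS_PER_STACK}-bit bus")
# 4 stacks: 16 GB, 4096-bit bus (the full Tesla P100)
# 3 stacks: 12 GB, 3072-bit bus (the new cut-down PCIe model)
```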

There is also still no word on a 32GB version. Samsung claimed the memory technology, 8GB stacks of HBM2, would be ready for products in Q4 2016 or early 2017. We'll need to wait and see where, when, and why it will appear.

Source: NVIDIA