NVIDIA Releases GeForce Drivers for Far Cry Primal, Gears of War: Ultimate Edition

Subject: Graphics Cards | March 2, 2016 - 10:30 PM |
Tagged: nvidia, geforce, game ready, 362.00 WHQL

The new Far Cry game is out (Far Cry Primal), and for NVIDIA graphics card owners this means a new GeForce Game Ready driver. The 362.00 WHQL certified driver provides “performance optimizations and a SLI profile” for the new game, and it is now available via GeForce Experience as well as from the manual driver download page.

(Image credit: Ubisoft)

The 362.00 WHQL driver also supports the new Gears of War: Ultimate Edition, a remastered version of the 2007 PC release that includes Windows 10-only enhancements such as 4K resolution support and unlocked frame rates. (Why these "need" to be Windows 10 exclusives can be explained by checking the name of the game’s publisher: Microsoft Studios.)

(Image credit: Microsoft)

Here’s a list of what’s new in version 362.00 of the driver:

Gaming Technology

  • Added Beta support on GeForce GTX GPUs for external graphics over Thunderbolt 3. GPUs supported include all GTX 900 series, Titan X, and GeForce GTX 750 and 750Ti.

Fermi GPUs:

  • As of Windows 10 November Update, Fermi GPUs now use WDDM 2.0 in single GPU configurations.

For multi-GPU configurations, WDDM usage is as follows:

  • In non-SLI multi-GPU configurations, Fermi GPUs use WDDM 2.0. This includes configurations where a Fermi GPU is used with Kepler or Maxwell GPUs.
  • In SLI mode, Fermi GPUs still use WDDM 1.3.

Application SLI Profiles

Added or updated the following SLI profiles:

  • Assassin's Creed Syndicate - SLI profile changed (with driver code as well) to make the application scale better
  • Bless - DirectX 9 SLI profile added, SLI set to SLI-Single
  • DayZ - SLI AA and NVIDIA Control Panel AA enhance disabled
  • Dungeon Defenders 2 - DirectX 9 SLI profile added
  • Elite Dangerous - 64-bit EXE added
  • Hard West - DirectX 11 SLI profile added
  • Metal Gear Solid V: The Phantom Pain - multiplayer EXE added to profile
  • Need for Speed - profile EXEs updated to support trial version of the game
  • Plants vs Zombies Garden Warfare 2 - SLI profile added
  • Rise of the Tomb Raider - profile added
  • Sebastien Loeb Rally Evo - profile updated to match latest app behavior
  • Tom Clancy's Rainbow Six: Siege - profile updated to match latest app behavior
  • Tom Clancy's The Division - profile added
  • XCOM 2 - SLI profile added (including necessary code change)


The “beta support on GeForce GTX GPUs for external graphics over Thunderbolt 3” is certainly an interesting addition, and one that could eventually lead to external GPU solutions for notebooks, coming on the heels of AMD teasing its own standardization of external GPUs.

The full Release 361 (GeForce 362.00) release notes can be viewed here (warning: PDF).

Source: NVIDIA
Manufacturer: Microsoft

Things are about to get...complicated

Earlier this week, the team behind Ashes of the Singularity released an updated version of its early access game, expanding its features and capabilities. With support for both DirectX 11 and DirectX 12, plus newly added multi-GPU support, the game features a benchmark mode that got quite a lot of attention. We saw stories based on that software posted by Anandtech, Guru3D and ExtremeTech, all of which had varying views on the advantages of one GPU or another.

That isn’t the focus of my editorial here today, though.

Shortly after the initial release, a discussion began around results from the Guru3D story that measured frame time consistency and smoothness with FCAT, a capture-based testing methodology much like the Frame Rating process we use here at PC Perspective. In his post on ExtremeTech, Joel Hruska claims that the results and conclusions from Guru3D are wrong because the FCAT capture method assumes the captured output matches what the user actually experiences. Maybe everyone is wrong?

First, a bit of background: I have been working with Oxide and the Ashes of the Singularity benchmark for a couple of weeks, hoping to get a story that I was happy with and felt was complete before having to head out the door to Barcelona for the Mobile World Congress. That didn’t happen; such is life with an 8-month-old. But in my time with the benchmark, I found a couple of things that were very interesting, even concerning, that I was working through with the developers.

FCAT overlay as part of the Ashes benchmark

First, the initial implementation of the FCAT overlay, which Oxide should be PRAISED for including since we don’t have, and likely won’t have, a universal DX12 variant of it, was implemented incorrectly, with duplicated color swatches that made the results from capture-based testing inaccurate. I don’t know if Guru3D used that version for its FCAT testing, but I was able to get updated EXEs of the game from the developer in order to get the overlay working correctly. Once that was corrected, I found yet another problem: an issue of frame presentation order on NVIDIA GPUs that likely has to do with asynchronous shaders. Whether that issue is on the NVIDIA driver side or the game engine side is still being investigated by Oxide, but it’s interesting to note that this problem couldn’t have been found without a proper FCAT implementation.
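For readers unfamiliar with how capture-based testing uses that overlay: each rendered frame is tagged with the next color in a known repeating sequence along one edge of the screen, and the captured video is then scanned to see how long each color stays on screen and whether the sequence ever breaks. The sketch below is only an illustration of that idea, not the actual FCAT toolchain; the 16-color palette and function names are assumptions for the example.

```python
# Illustrative sketch only: the palette and structure are assumptions, not the
# real FCAT toolchain. Real analysis first extracts the overlay color of each
# captured frame from a lossless video capture.

OVERLAY_SEQUENCE = [
    "white", "lime", "blue", "red", "teal", "navy", "green", "aqua",
    "darkred", "olive", "purple", "maroon", "silver", "gray", "yellow", "fuchsia",
]

def classify_captured_frames(captured_colors, refresh_hz=60.0):
    """Count how many scanouts each rendered frame stayed on screen and flag
    overlay colors that break the expected repeating sequence."""
    scanout_ms = 1000.0 / refresh_hz      # one captured frame per display refresh
    frame_times = []                      # (color, time on screen in ms)
    anomalies = []                        # (capture index, previous color, unexpected color)

    prev_color, scanouts = captured_colors[0], 1
    for i, color in enumerate(captured_colors[1:], start=1):
        if color == prev_color:
            scanouts += 1                 # same rendered frame shown again
            continue
        frame_times.append((prev_color, scanouts * scanout_ms))
        expected = OVERLAY_SEQUENCE[(OVERLAY_SEQUENCE.index(prev_color) + 1) % len(OVERLAY_SEQUENCE)]
        if color != expected:             # duplicated or skipped swatch -> broken overlay
            anomalies.append((i, prev_color, color))
        prev_color, scanouts = color, 1
    frame_times.append((prev_color, scanouts * scanout_ms))
    return frame_times, anomalies
```

With a correct overlay the anomaly list stays empty; a build with duplicated color swatches shows up immediately as out-of-sequence entries, which is exactly why the updated EXEs mattered.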

With all of that behind me, I set out to benchmark this latest version of Ashes and DX12 to measure performance across a range of AMD and NVIDIA hardware. The data showed some abnormalities, though. Some results just didn’t make sense in the context of what I was seeing in the game and what the overlay results were indicating. It appeared that Vsync (vertical sync) was working differently than I had seen in any other game on the PC.

For the NVIDIA platform, tested using a GTX 980 Ti, the game seemingly randomly starts up with Vsync on or off, with no clear indicator of what was causing it, despite the in-game settings being set how I wanted them. The Frame Rating capture data was still usable, though; just because Vsync is enabled doesn’t mean you can’t analyze results captured this way. I have written stories on what Vsync-enabled capture data looks like and what it means as far back as April 2013. Obviously, to get the best and most relevant data from Frame Rating, setting vertical sync off is ideal. Running into more frustration than answers, I moved over to an AMD platform.
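As a rough illustration of how Vsync shows up in captured data (and only an illustration; this is not part of the Frame Rating toolchain), frame display times measured from a capture snap to whole multiples of the refresh interval when Vsync is active, so a quick check like the one below can flag runs that silently started with Vsync on. The function name and thresholds are assumptions for the example.

```python
# Rough sanity check, not an official tool: with Vsync on, every frame stays on
# screen for an integer number of refresh intervals, so measured display times
# should cluster on multiples of ~16.7 ms at 60 Hz.

def looks_vsynced(display_times_ms, refresh_hz=60.0, tolerance_ms=1.0):
    """Return True if nearly all captured per-frame display times sit on whole
    multiples of the refresh interval."""
    interval = 1000.0 / refresh_hz
    snapped = 0
    for t in display_times_ms:
        nearest = round(t / interval) * interval
        if nearest >= interval and abs(t - nearest) <= tolerance_ms:
            snapped += 1
    return snapped / len(display_times_ms) > 0.95

print(looks_vsynced([16.7, 16.6, 33.3, 16.7, 16.8]))  # True: quantized to 60 Hz scanouts
print(looks_vsynced([9.2, 11.5, 14.8, 12.3, 10.1]))   # False: frame times vary freely
```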

Continue reading PC Gaming Shakeup: Ashes of the Singularity, DX12 and the Microsoft Store!!

JPR Notes Huge Increase in Enthusiast AIBs for 2015

Subject: Graphics Cards | February 29, 2016 - 07:06 PM |
Tagged: nvidia, amd, AIB, pc gaming

Jon Peddie Research, a market analysis firm that specializes in PC hardware, has compiled another report about add-in board (AIB) sales. There are a few interesting aspects to this report. First, shipments of enthusiast AIBs (i.e., discrete graphics cards) are up not by a handful of percent, but two-fold. Second, AMD's GPU market share climbed once again, from 18.8% up to 21.1%.

This image seems to contradict their report, which claims the orange line rose from 44 million in 2014 to 50 million in 2015. I'm not sure where the error is, so I didn't mention it in the news post itself.
Image Credit: JPR

The report claims that neither AMD nor NVIDIA released a “killer new AIB in 2015.” That... depends on how you look at it. They're clearly referring to the upper mainstream, the cards that sit just below the flagships and contribute a large chunk of enthusiast sales. If they were including the flagships, then they ignored the Titan X, 980 Ti, and Fury line of GPUs, which would just be silly. Since they were counting shipped units, though, it makes sense to neglect those SKUs, because they are priced well above the inflection point in actual adoption.

Image Credit: JPR

But that's not the only “well... sort-of” in JPR's statement. Unlike most generations, the GTX 970 and 980 launched in September 2014 rather than on the usual spring-ish cadence, a pattern that, apart from the GeForce GTX 580, has held since the GeForce 9000 series. As such, those late-2014 launches could have influenced 2015 much as an early-2015 product line would have. Add a bit of VR hype, plus the now-common knowledge that this generation's consoles are less powerful than PCs, and these numbers start to make a little more sense.

Still, a 100% increase in enthusiast AIB shipments is quite interesting. It doesn't only mean that game developers can target higher-end hardware. The same hardware used to consume content can be used to create it, which boosts both sides of the artist/viewer conversation in art. Beyond its benefits to society, this could snowball into more GPU adoption going forward.

Source: JPR

Podcast #387 - ASUS PB328Q, Samsung 750 EVO SSD, the release of Vulkan and more!

Subject: General Tech | February 18, 2016 - 07:16 PM |
Tagged: x16 LTE, vulkan, video, ssd, Samsung, qualcomm, podcast, pb328q, opengl, nvidia, micron, Khronos, gtx 950, asus, apple, 840 evo, 750ti, 750 evo, 3d nand

PC Perspective Podcast #387 - 02/18/2016

Join us this week as we discuss the ASUS PB328Q, Samsung 750 EVO SSD, the release of Vulkan and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Program length: 1:34:18

  1. Week in Review:
  2. 0:35:00 This episode of the PC Perspective Podcast is brought to you by Audible, the world's leading provider of audiobooks with more than 180,000 downloadable titles across all types of literature including fiction, nonfiction, and periodicals. For your free audiobook, go to audible.com/pcper
  3. News items of interest:
  4. 1:07:00 This episode of PC Perspective Podcast is brought to you by Braintree. Even the best mobile app won’t work without the right payments API. That’s where the Braintree v.0 SDK comes in. One amazingly simple integration gives you every way to pay. Try out the sandbox and see for yourself at braintree­payments.com/pcper
  5. Hardware/Software Picks of the Week
  6. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

More Tomb Raiding and a new bundle from NVIDIA

Subject: General Tech | February 17, 2016 - 06:47 PM |
Tagged: gaming, tomb raider, nvidia, bundle, the division

The benchmarks that Ryan ran on the new Tomb Raider game were popular enough that it seems worth giving you a second opinion, in this case from [H]ard|OCP. They used version 1.0 build 610.1_64 of the game, along with the Crimson Edition 16.1.1 Hotfix and GeForce 361.75 drivers, during their testing. For 1080p performance they tested the R9 380X 4GB against the GTX 960 4GB, which saw AMD's card come out on top, with similar results when testing the R9 390 against the GeForce GTX 970 at 1440p. At the top end, the R9 390X and GTX 980 performed similarly in single-GPU configurations, with AMD beating NVIDIA in multi-GPU tests. They also tried the Titan X, Fury, and Fury X in single-GPU configurations at 4K ... it did not go well.


Also worth noting is the new bundle from NVIDIA: if you pick up a GeForce GTX 970 or better GPU, or purchase a notebook with a GeForce GTX 970M, 980M, or 980, then you are eligible for a free copy of The Division. Click here to redeem your code or to find out how to get one.


"A new Tomb Raider game is out, Rise of the Tomb Raider. We take RoTR and find out how it performs and compares on no less than 14 of today's latest video card offerings from the AMD and NVIDIA GPU lineups from top-to-bottom using the latest drivers and game patch v1.0 build 610.1_64."


Source: [H]ard|OCP

Report: NVIDIA Working on Another GeForce GTX 950 GPU

Subject: Graphics Cards | February 16, 2016 - 05:01 PM |
Tagged: rumor, report, nvidia, Maxwell 2.0, GTX 950 SE, GTX 950 LP, gtx 950, gtx 750, graphics card, gpu

A report from VideoCardz.com claims that NVIDIA is working on another GTX 950 graphics card, but not the 950 Ti you might have expected.

Reference GTX 950 (Image credit: NVIDIA)

While the GTX 750 Ti was succeeded by the GTX 950 in August of last year, the higher specs of that new GPU came at the cost of a higher TDP (90W vs. 60W). The new rumored GTX 950, which might be called either the 950 SE or 950 LP according to the report, would be a lower-power version of the GTX 950, and would actually have a lot more in common with the outgoing GTX 750 Ti than with the plain GTX 750, as we can see from this chart:

(Image credit: VideoCardz)

As you can see, the GTX 750 Ti is based on GM107 (Maxwell 1.0) and has 640 CUDA cores, 40 texture units, and 16 ROPs, and it operates at 1020 MHz base/1085 MHz boost clocks. The reported specs of this new GTX 950 SE/LP would be nearly identical, though based on GM206 (Maxwell 2.0) and offering greater memory bandwidth (and slightly higher power consumption).

The VideoCardz report was sourced from Expreview, which claimed that this GTX 950 SE/LP product would arrive at some point next month. The report is a little more vague than some of the rumors we see, but it could very well be that NVIDIA has a planned replacement for the remaining Maxwell 1.0 products on the market. I would have personally expected to see a “Ti” product before any “SE/LP” version of the GTX 950, and the reported name seems more like an OEM product than a retail part. We will have to wait and see if this report is accurate.

Source: VideoCardz

Podcast #386 - Logitech G810, Phanteks Enthoo EVOLV ITX, GTX 980 Ti VR Edition and more!

Subject: General Tech | February 11, 2016 - 05:27 PM |
Tagged: vr edition, video, UMC, ue4, podcast, phanteks, nvidia, logitech, GTX 980 Ti, g810, evga, enthoo evolv itx, asrtock, arm, amd, 28HPCU

PC Perspective Podcast #386 - 02/10/2016

Join us this week as we discuss the Logitech G810, Phanteks Enthoo EVOLV ITX, GTX 980 Ti VR Edition and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Program length: 1:30:34

  1. Week in Review:
  2. 0:36:45 This week’s podcast is brought to you by Casper. Use code PCPER at checkout for $50 towards your order!
  3. News items of interest:
  4. Hardware/Software Picks of the Week
  5. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Manufacturer: Various

Early testing for higher end GPUs

UPDATE 2/5/16: Nixxes released a new version of Rise of the Tomb Raider today with some significant changes. I have added another page at the end of this story that looks at results with the new version of the game and a new AMD driver, and I've also included some SLI and CrossFire results.

I will fully admit to being jaded by the industry on many occasions. I love my PC games and I love hardware, but it takes a lot for me to get genuinely excited about anything. After hearing game reviewers talk up the newest installment of the Tomb Raider franchise, Rise of the Tomb Raider, since its release on the Xbox One last year, I've been waiting for its PC release to give it a shot with real hardware. As you'll see in the screenshots and video in this story, the game doesn't appear to disappoint.


Rise of the Tomb Raider takes the exploration and "tomb raiding" aspects that made the first games in the series successful and pairs them with the visual quality and character design brought in with the reboot of the series a couple of years back. The result is a PC game that looks stunning at any resolution, and even more so in 4K, and one that pushes your hardware to its limits. For single-GPU performance, even the GTX 980 Ti and Fury X struggle to keep their heads above water.

In this short article we'll look at the performance of Rise of the Tomb Raider with a handful of GPUs, leaning towards the high end of the product stack, and offer up my view on whether each hardware vendor is living up to expectations.

Continue reading our look at GPU performance in Rise of the Tomb Raider!!

NVIDIA Stops SHIELD Android 6.0 Update After Wi-Fi Issues

Subject: Mobile | February 4, 2016 - 02:39 PM |
Tagged: wi-fi, shield tablet, shield, ota update, nvidia, android 6.0

NVIDIA has pulled the Android 6.0 OTA update for the original SHIELD (pre-K1) tablet after users experienced Wi-Fi connection issues. A post on NVIDIA's official forums explains:

"We have temporarily turned off the OTA update until we understand why a few users are losing WiFi connection after updating their tablet to OTA 4.0."

(Image: Android Police)

The post is authored by Manuel Guzman of NVIDIA Customer Care, and includes a list of potential fixes:

  • Reboot your tablet 2-3 times. If this fails, power cycle your tablet 3-4 times (not reboot but complete power off). If this does not work, charge your tablet to 100% and attempt again a couple of times or so.
  • Factory reset your tablet. Make sure you backup any important files before you perform this step.
  • A couple of users reported their WiFi coming back after leaving their tablet powered off for a few hours. Try leaving your tablet powered off for a few hours and then turn the device back on.

Users who still have issues connecting are asked to navigate to the Advanced Wi-Fi page on their tablet, and then to "take a screenshot and email the picture to driverfeedback@nvidia.com".

GDDR5X Memory Standard Gets Official with JEDEC

Subject: Graphics Cards, Memory | January 22, 2016 - 04:08 PM |
Tagged: Polaris, pascal, nvidia, jedec, gddr5x, GDDR5, amd

Though information about the technology has been making the rounds over the last several weeks, GDDR5X finally gets official with an announcement from JEDEC this morning. The JEDEC Solid State Technology Association is, as Wikipedia tells us, an "independent semiconductor engineering trade organization and standardization body" responsible for creating memory standards. Getting the official nod from the org means we are likely to see implementations of GDDR5X in the near future.

The press release is short and sweet. Take a look.

ARLINGTON, Va., USA – JANUARY 21, 2016 –JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of JESD232 Graphics Double Data Rate (GDDR5X) SGRAM.  Available for free download from the JEDEC website, the new memory standard is designed to satisfy the increasing need for more memory bandwidth in graphics, gaming, compute, and networking applications.

Derived from the widely adopted GDDR5 SGRAM JEDEC standard, GDDR5X specifies key elements related to the design and operability of memory chips for applications requiring very high memory bandwidth.  With the intent to address the needs of high-performance applications demanding ever higher data rates, GDDR5X  is targeting data rates of 10 to 14 Gb/s, a 2X increase over GDDR5.  In order to allow a smooth transition from GDDR5, GDDR5X utilizes the same, proven pseudo open drain (POD) signaling as GDDR5.

“GDDR5X represents a significant leap forward for high end GPU design,” said Mian Quddus, JEDEC Board of Directors Chairman.  “Its performance improvements over the prior standard will help enable the next generation of graphics and other high-performance applications.”

JEDEC claims that, by using the same signaling type as GDDR5, GDDR5X is able to double the per-pin data rate to 10-14 Gb/s. In fact, based on leaked slides about GDDR5X from October, JEDEC actually calls GDDR5X an extension to GDDR5, not a new standard. How does GDDR5X reach these new speeds? By doubling the prefetch from 32 bytes to 64 bytes. This will require a redesign of the memory controller for any processor that wants to integrate it.
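To put the per-pin numbers in context, here is the back-of-the-envelope math for raw peak bandwidth, assuming a 256-bit bus purely for illustration (the bus width and the specific data rates chosen are my assumptions, not figures from the JEDEC announcement); usable bandwidth, discussed below, is a separate question.

```python
# Theoretical peak bandwidth only; real-world (usable) bandwidth is lower and
# depends on access patterns. The 256-bit bus width is an illustrative assumption.

def peak_bandwidth_gb_s(per_pin_gbps, bus_width_bits):
    """Peak bandwidth in GB/s: per-pin rate (Gb/s) * bus width (bits) / 8 bits per byte."""
    return per_pin_gbps * bus_width_bits / 8

BUS_WIDTH = 256  # bits

print(peak_bandwidth_gb_s(7, BUS_WIDTH))   # GDDR5 at 7 Gb/s   -> 224.0 GB/s
print(peak_bandwidth_gb_s(10, BUS_WIDTH))  # GDDR5X at 10 Gb/s -> 320.0 GB/s
print(peak_bandwidth_gb_s(14, BUS_WIDTH))  # GDDR5X at 14 Gb/s -> 448.0 GB/s
```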

Image source: VR-Zone.com

As for usable bandwidth, though no figure is quoted directly, it would likely see a much smaller increase than the per-pin numbers in the press release suggest. Because the memory bus width remains unchanged and GDDR5X simply grabs chunks twice the size in prefetch, we should expect an incremental change. Power efficiency isn't mentioned either, and that was one of the driving factors in the development of HBM.

Performance efficiency graph from AMD's HBM presentation

I am excited about any improvement in memory technology that will increase GPU performance, but I can tell you that from my conversations with both AMD and NVIDIA, no one appears to be jumping at the chance to integrate GDDR5X into upcoming graphics cards. That doesn't mean it won't happen with some version of Polaris or Pascal, but it seems that there may be concerns other than bandwidth that keep it from taking hold. 

Source: JEDEC