NVIDIA Releases 364.47 WHQL Drivers... with Vulkan

Subject: Graphics Cards | March 7, 2016 - 06:15 PM |
Tagged: vulkan, nvidia, graphics drivers, game ready

This new driver from NVIDIA brings Vulkan support to its current, supported branch. This is particularly interesting for me, because the Vulkan branch used to pre-date fixes for Adobe Creative Cloud, which meant that things like “Export As...” in Photoshop CC didn't work and After Effects CC would crash. The driver is also WHQL-certified, and it rolls in all of the “Game Ready” fixes and optimizations released since ~October, most of which are new to the Vulkan branch.

... This is going to be annoying to temporarily disable...

Speaking of which, the GeForce Game Ready 364.47 driver is itself classified as “Game Ready.” The four titles optimized with this release are Tom Clancy's The Division, Need For Speed, Hitman, and Ashes of the Singularity. If you are interested in playing those games, then this driver is the one NVIDIA recommends that you use.

Note that an installation bug has been reported, however. When installing with multiple monitors attached, NVIDIA suggests that you disable all but one during the setup process; you can safely re-enable them afterward. For me, with four monitors and a fairly meticulous desktop icon layout, this is highly annoying, but it's something I've had to deal with for a while now (especially with the last two, beta Vulkan drivers). It's probably a good idea to close all applications and screenshot your icon layout before running the installer.

Source: NVIDIA

New ASUS GeForce GTX 950 2G Requires No PCIe Power

Subject: Graphics Cards | March 4, 2016 - 04:48 PM |
Tagged: PCIe power, PCI Express, nvidia, GTX 950 2G, gtx 950, graphics card, gpu, geforce, asus, 75W

ASUS has released a new version of the GTX 950 called the GTX 950 2G, and the interesting part isn't what's been added, but what was taken away; namely, the PCIe power requirement.

When NVIDIA announced the GTX 950 (which Ryan reviewed here) it carried a TDP of 90W, which prevented it from running without a PCIe power connector. The GTX 950 was (seemingly) the replacement for the GTX 750, which didn't require anything beyond motherboard power via the PCIe slot, and the same held true for the more powerful GTX 750 Ti. Without the need for PCIe power, that GTX 750 Ti became our (and many others') default recommendation for turning any PC into a gaming machine (an idea we just happened to cover in depth here).

Here's a look at the specs from ASUS for the GTX 950 2G:

  • Graphics Engine: NVIDIA GeForce GTX 950
  • Interface: PCI Express 3.0
  • Video Memory: GDDR5 2GB
  • CUDA Cores: 768
  • Memory Clock: 6610 MHz
  • Memory Interface: 128-bit
  • Engine Clock
    • Gaming Mode (Default) - GPU Boost Clock: 1190 MHz, GPU Base Clock: 1026 MHz
    • OC Mode - GPU Boost Clock: 1228 MHz, GPU Base Clock: 1051 MHz
  • Outputs: HDMI 2.0, DisplayPort, DVI
  • Power Consumption: Up to 75W, no additional PCIe power required
  • Dimensions: 8.3 x 4.5 x 1.6 inches

Whether this model has any relation to the rumored "GTX 950 SE/LP" remains to be seen (other than power, this card appears to carry stock GTX 950 specs), but the option of adding a GPU without any concern over power requirements makes this a very attractive upgrade proposition for older builds or OEM PCs, depending on cost.

The full model number is ASUS GTX950-2G, and a listing is up on Amazon, though seemingly only a placeholder at the moment. (Link removed. The listing was apparently for an existing GTX 950 product.)

Source: ASUS

AMD to Add DirectFlip for DX12, Indicates FreeSync Incompatible with UWP Games

Subject: Graphics Cards | March 3, 2016 - 03:00 PM |
Tagged: uwp, radeon, dx12, amd

AMD's Robert Hallock, frequenter of the PC Perspective live streams and a favorite of the team here, is doing an AMAA on reddit today. While you can find some excellent information and views from Robert in that Q&A session, two particular answers stood out to me.

Asked by user CataclysmZA: Can you comment on the recent developments regarding Ashes of the Singularity and DirectX 12 in PC Perspective and Extremetech's tests? Will changes in AMD's driver to include FlipEx support fix the framerate issues and allow high-refresh monitor owners to enjoy their hardware fully? http://www.pcper.com/reviews/General-Tech/PC-Gaming-Shakeup-Ashes-Singularity-DX12-and-Microsoft-Store

Answer from Robert: We will add DirectFlip support shortly.

Well, there you have it. This is the first official notice I have from AMD that it is in fact its driver that was causing the differences in behavior between Radeon and GeForce cards in Ashes of the Singularity last week. It appears that a new driver will be incoming (sometime) that will enable DirectFlip / FlipEx, allowing exclusive full screen modes in DX12 titles. Some of our fear of the unknown can likely be resolved - huzzah!
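For context on what that driver change enables: on the application side, this is just a flip-model swap chain, which DX12 requires anyway. Below is a minimal sketch of creating one (a generic illustration, not Oxide's or AMD's actual code); whether the OS and driver then promote it to DirectFlip, scanning the application's buffer out directly instead of compositing it through the DWM, happens entirely outside the application, which is why AMD can address this with a driver update.

```cpp
#include <windows.h>
#include <dxgi1_4.h>
#include <d3d12.h>

// Create a flip-model swap chain for a DX12 command queue. DX12 only permits
// the flip presentation modes, so every DX12 title already has one of these;
// DirectFlip "promotion" (scanning the app's buffer out directly instead of
// compositing it through DWM) is then decided by the OS and driver, not here.
HRESULT CreateFlipSwapChain(IDXGIFactory4* factory,
                            ID3D12CommandQueue* queue,
                            HWND hwnd, UINT width, UINT height,
                            IDXGISwapChain1** outSwapChain)
{
    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Width  = width;                 // matching the display mode helps
    desc.Height = height;                // promotion to direct scanout
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 2;
    desc.SwapEffect  = DXGI_SWAP_EFFECT_FLIP_DISCARD;  // the flip model
    return factory->CreateSwapChainForHwnd(queue, hwnd, &desc,
                                           nullptr, nullptr, outSwapChain);
}
```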

Ashes of the Singularity wouldn't enter exclusive full screen mode on AMD Radeon hardware.

Another question also piqued my interest:

Asked by user CataclysmZA: Can you comment on how FreeSync is affected by the way games sold through the Windows Store run in borderless windowed mode?

Answer from Robert: This article discusses the issue thoroughly. Quote: "games sold through Steam, Origin [and] anywhere else will have the ability to behave with DX12 as they do today with DX11."

While not exactly spelling it out, this answer seems to indicate that, for the time being, AMD doesn't think FreeSync will work with games sold through the Microsoft Store in their forced borderless windowed mode. NVIDIA has stated that G-Sync works in some scenarios with the new Gears of War (a Universal Windows Platform app), but it seems they too have issues.

As more information continues to come in, from whatever sources we can validate, I'll keep you updated!

Source: reddit

Microsoft Comments on UWP Games and Full Screen Capability

Subject: Graphics Cards | March 3, 2016 - 11:54 AM |
Tagged: uwp, uwa, universal windows platform, microsoft, full screen, dx12, DirectX 12

With all of the debate and discussion that followed the second release of Ashes of the Singularity's DX12 benchmark mode – questions about full screen capabilities on AMD hardware and a debate over the impact the Microsoft Store and Universal Windows Platform would have on PC gaming – we went to the source to try and get some feedback. Microsoft was willing to talk about the issues that arose from this most recent storm, though honestly what it is willing to say on the record today is limited.

When asked specifically about the UWP and PC games made available on the Windows 10 Store, Microsoft reiterated its desire to work with gamers and the community to find what works.

“UWP (Universal Windows Platform) allows developers to create experiences that are easily deployed across all Windows 10 devices, from PCs to tablets to phones to Xbox One. When it comes to a UWP game on Windows 10 PCs, we’re early in our journey. We’re listening to the feedback from the community – multiple GPUs, SLI, crossfire, v-sync, etc. We’re embracing the feedback and working to ensure gamers on Windows 10 have a great experience. We’ll have more to discuss in the coming months.” – a Microsoft spokesperson

It's good to know that Microsoft is listening to the media and gamers and seems willing to make changes based on feedback. It remains to be seen, though, which of that feedback gets implemented, and in what time frame.

Universal Windows Platform

One particular fear for some gamers is that Microsoft would attempt to move to the WDDM compositing model not just for games sold in the Windows Store, but for all games that run on the OS. I asked Microsoft directly:

“To answer your question, can we assume that those full screen features that work today with DX12 will work in the future as well – yes.”

This should ease the worries of people thinking the very worst for Windows and DX12 gaming going forward. As long as DX12 allows for games to enter into an exclusive full screen mode, like the FlipEx option we discussed in a previous story, games sold through Steam, Origin and anywhere else will have the ability to behave with DX12 as they do today with DX11.

Windows 10 Store

I have some meetings set up with various viewpoints on this debate for GDC in a couple of weeks, so expect more then!

NVIDIA Releases GeForce Drivers for Far Cry Primal, Gears of War: Ultimate Edition

Subject: Graphics Cards | March 2, 2016 - 05:30 PM |
Tagged: nvidia, geforce, game ready, 362.00 WHQL

The new Far Cry game is out (Far Cry Primal), and for NVIDIA graphics card owners this means a new GeForce Game Ready driver. The 362.00 WHQL-certified driver provides “performance optimizations and a SLI profile” for the new game, and is now available via GeForce Experience as well as the manual driver download page.

(Image credit: Ubisoft)

The 362.00 WHQL driver also supports the new Gears of War: Ultimate Edition, a remastered version of the 2007 PC version of the game that includes Windows 10-only enhancements such as 4K resolution support and unlocked frame rates. (Why these "need" to be Windows 10 exclusives can be explained by checking the name of the game’s publisher: Microsoft Studios.)

(Image credit: Microsoft)

Here’s a list of what’s new in version 362.00 of the driver:

Gaming Technology

  • Added Beta support on GeForce GTX GPUs for external graphics over Thunderbolt 3. GPUs supported include all GTX 900 series, Titan X, and GeForce GTX 750 and 750Ti.

Fermi GPUs:

  • As of Windows 10 November Update, Fermi GPUs now use WDDM 2.0 in single GPU configurations.

For multi-GPU configurations, WDDM usage is as follows:

  • In non-SLI multi-GPU configurations, Fermi GPUs use WDDM 2.0. This includes configurations where a Fermi GPU is used with Kepler or Maxwell GPUs.
  • In SLI mode, Fermi GPUs still use WDDM 1.3.

Application SLI Profiles

Added or updated the following SLI profiles:

  • Assassin's Creed Syndicate - SLI profile changed (with driver code as well) to make the application scale better
  • Bless - DirectX 9 SLI profile added, SLI set to SLI-Single
  • DayZ - SLI AA and NVIDIA Control Panel AA enhance disabled
  • Dungeon Defenders 2 - DirectX 9 SLI profile added
  • Elite Dangerous - 64-bit EXE added
  • Hard West - DirectX 11 SLI profile added
  • Metal Gear Solid V: The Phantom Pain - multiplayer EXE added to profile
  • Need for Speed - profile EXEs updated to support trial version of the game
  • Plants vs Zombies Garden Warfare 2 - SLI profile added
  • Rise of the Tomb Raider - profile added
  • Sebastien Loeb Rally Evo - profile updated to match latest app behavior
  • Tom Clancy's Rainbow Six: Siege - profile updated to match latest app behavior
  • Tom Clancy's The Division - profile added
  • XCOM 2 - SLI profile added (including necessary code change)

The "beta support on GeForce GTX GPUs for external graphics over Thunderbolt 3" is certainly interesting addition, and one that could eventually lead to external solutions for notebooks, coming on the heels of AMD teasing their own standardization of external GPUs.

The full Release 361 (GeForce 362.00) release notes can be viewed here (warning: PDF).

Source: NVIDIA

Tesla Motors Hires Peter Bannon of Apple

Subject: Graphics Cards, Processors | February 29, 2016 - 06:48 PM |
Tagged: tesla motors, tesla, SoC, Peter Bannon, Jim Keller

When we found out that Jim Keller had joined Tesla, we were a bit confused. He is highly skilled in processor design, and he moved to a company that does not design processors. Kind of weird, right? There are two possibilities that leap to mind: either he wanted to try something new in life and Elon Musk hired him for his general management skills, or Tesla wants to get more involved in the production of its SoCs, possibly even designing its own.

Now Peter Bannon, who was a colleague of Jim Keller at Apple, has been hired by Tesla Motors. Chances are, the two of them were not independently interested in an abrupt career change that just happened to lead them to the same company; that seems highly unlikely, to say the least. So it appears that Tesla Motors wants experienced chip designers in house. What for? We don't know. This is a lot of talent just to look over the shoulders of NVIDIA and other SoC partners to make sure Tesla keeps the upper hand in negotiations. Jim Keller is at Tesla as its “Vice-President of Autopilot Hardware Engineering.” We don't know what Peter Bannon's title will be.

And then, if Tesla Motors does get into creating its own hardware, we wonder what it will do with it. The company has a history of open development and of releasing patents (etc.) to the public. That said, SoC design is a highly patent-encumbered field, depending on what, specifically, they end up doing, which we have no idea about.

Source: Electrek
Manufacturer: Microsoft

Things are about to get...complicated

Earlier this week, the team behind Ashes of the Singularity released an updated version of its early access game, expanding its features and capabilities. With support for both DirectX 11 and DirectX 12, and with multiple graphics card support added in, the game's benchmark mode got quite a lot of attention. We saw stories based on that software posted by Anandtech, Guru3D and ExtremeTech, all of which had varying views on the advantages of one GPU or another.

That isn’t the focus of my editorial here today, though.

Shortly after the initial release, a discussion began around results from the Guru3D story that measured frame time consistency and smoothness with FCAT, a capture-based testing methodology much like the Frame Rating process we have here at PC Perspective. In a post on ExtremeTech, Joel Hruska claims that the results and conclusion from Guru3D are wrong because the FCAT capture method assumes the captured output matches what the user actually experiences. Maybe everyone is wrong?

First a bit of background: I have been working with Oxide and the Ashes of the Singularity benchmark for a couple of weeks, hoping to get a story that I was happy with and felt was complete, before having to head out the door to Barcelona for the Mobile World Congress. That didn’t happen – such is life with an 8-month old. But, in my time with the benchmark, I found a couple of things that were very interesting, even concerning, that I was working through with the developers.

FCAT overlay as part of the Ashes benchmark

First, the initial implementation of the FCAT overlay, which Oxide should be PRAISED for including since we don’t have, and likely won’t have, a universal DX12 variant of it, was implemented incorrectly, with duplicated color swatches that made the results from capture-based testing inaccurate. I don’t know if Guru3D used that version to do its FCAT testing, but I was able to get some updated EXEs of the game through the developer in order to get the overlay working correctly. Once that was corrected, I found yet another problem: an issue of frame presentation order on NVIDIA GPUs that likely has to do with asynchronous shaders. Whether that issue is on the NVIDIA driver side or the game engine side is still being investigated by Oxide, but it’s interesting to note that this problem couldn’t have been found without a proper FCAT implementation.
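To see why duplicated color swatches break capture-based testing, consider the core of the analysis. Each rendered frame stamps the next color from a repeating palette into an overlay bar, the display output is captured, and the analysis walks the captured scanlines to measure how long each color, and therefore each frame, was on screen. Here is a rough sketch of that loop (the palette size and the anomaly reporting are my own illustrative assumptions, not the actual FCAT tooling):

```cpp
#include <cstdio>
#include <vector>

// Per-scanline palette indices come from the capture card; walk them and
// measure how many scanlines (i.e., how much display time) each frame got.
constexpr int kPaletteSize = 16;   // assumed palette size, for illustration

struct FrameRecord {
    int colorId;     // which palette entry the game frame drew
    int scanlines;   // how long it stayed on screen
};

std::vector<FrameRecord> AnalyzeOverlay(const std::vector<int>& scanlineColors)
{
    std::vector<FrameRecord> frames;
    for (int color : scanlineColors) {
        if (!frames.empty() && frames.back().colorId == color) {
            frames.back().scanlines++;              // same frame, still visible
            continue;
        }
        if (!frames.empty()) {
            int expected = (frames.back().colorId + 1) % kPaletteSize;
            if (color != expected)                  // presentation-order anomaly
                std::printf("warning: saw color %d after %d\n",
                            color, frames.back().colorId);
        }
        frames.push_back({color, 1});               // a new frame hit the screen
    }
    return frames;
}
```

If two different frames draw the same swatch, they silently merge into one record and the measured frame times are wrong; if a color arrives out of palette order, you have caught exactly the kind of presentation-order problem described above.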

With all of that under the bridge, I set out to benchmark this latest version of Ashes under DX12 to measure performance across a range of AMD and NVIDIA hardware. The data showed some abnormalities, though. Some results just didn’t make sense in the context of what I was seeing in the game and what the overlay results were indicating. It appeared that Vsync (vertical sync) was working differently than I had seen with any other game on the PC.

For the NVIDIA platform, tested using a GTX 980 Ti, the game seemingly randomly starts up with Vsync on or off, with no clear indicator of what is causing it, despite the in-game settings being set how I wanted them. But the Frame Rating capture data was still working as I expected – just because Vsync is enabled doesn’t mean you can’t analyze results from captured output. I have written stories on what Vsync-enabled captured data looks like, and what it means, as far back as April 2013. Obviously, to get the best and most relevant data from Frame Rating, setting vertical sync off is ideal. Running into more frustration than answers, I moved over to an AMD platform.

Continue reading PC Gaming Shakeup: Ashes of the Singularity, DX12 and the Microsoft Store!!

JPR Notes Huge Increase in Enthusiast AIBs for 2015

Subject: Graphics Cards | February 29, 2016 - 02:06 PM |
Tagged: nvidia, amd, AIB, pc gaming

Jon Peddie Research, a market analysis firm that specializes in PC hardware, has compiled another report about add-in board (AIB) sales. There are a few interesting aspects to this report. First, shipments of enthusiast AIBs (i.e., discrete GPUs) are up, not by a handful of percent, but two-fold. Second, AMD's GPU market share climbed once again, from 18.8% up to 21.1%.

Image Credit: JPR

The report claims that neither AMD nor NVIDIA released a “killer new AIB in 2015.” That... depends on how you look at it. They're clearly referring to the upper mainstream, which sits just below the flagship and contributes a large chunk of enthusiast sales. If they were including the flagships, then they ignored the Titan X, 980 Ti, and Fury line of GPUs, which would just be silly. Since they were counting shipped units, though, it makes sense to neglect those SKUs, because they are priced well above the inflection point in actual adoption.

Image Credit: JPR

But that's not the only “well... sort-of” with JPR's statement. Unlike most generations, the GTX 970 and 980 launched late in 2014, rather than on NVIDIA's usual spring-ish cadence; apart from the GeForce GTX 580, that cadence had held since the GeForce 9000-series. As such, those late-2014 launches could have had a similar influence as another year's early-2015 product line. Add a bit of VR hype, plus common knowledge that the consoles are lower powered than PCs this generation, and these numbers make a little more sense.

Even still, a 100% increase in enthusiast AIB shipments is quite interesting. This doesn't only mean that game developers can target higher-end hardware. The same hardware used to consume content can be used to create it, which boosts both sides of the artist / viewer conversation in art. Beyond its benefits to society, this could snowball into more GPU adoption going forward.

Source: JPR

MWC 16: Imagination Technologies Ray Tracing Accelerator

Subject: Graphics Cards, Mobile, Shows and Expos | February 23, 2016 - 08:46 PM |
Tagged: raytracing, ray tracing, PowerVR, mwc 16, MWC, Imagination Technologies

For the last couple of years, Imagination Technologies has been pushing hardware-accelerated ray tracing. One of the major problems in computer graphics is knowing what geometry and material corresponds to a specific pixel on the screen. Several methods exist, although typical GPUs crush a 3D scene into the virtual camera's 2D space and do a point-in-triangle test on it. Once the GPU knows where in the triangle the pixel lies, if it lies in it at all, the pixel can be colored by a pixel shader.
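As a concrete, heavily simplified illustration of that test, here is the classic 2D edge-function check that rasterization boils down to after projection. This is textbook material, not anything specific to PowerVR:

```cpp
// 2D edge-function test: a pixel center is inside the projected triangle if
// it lies on the same side of all three edges. Generic textbook form.
struct Point2 { float x, y; };

static float EdgeFunction(Point2 a, Point2 b, Point2 p)
{
    // Signed area of the parallelogram (a->b, a->p); the sign gives the side.
    return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
}

bool PixelInTriangle(Point2 v0, Point2 v1, Point2 v2, Point2 pixel)
{
    float e0 = EdgeFunction(v0, v1, pixel);
    float e1 = EdgeFunction(v1, v2, pixel);
    float e2 = EdgeFunction(v2, v0, pixel);
    // Accept either winding: all non-negative or all non-positive.
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
           (e0 <= 0 && e1 <= 0 && e2 <= 0);
}
```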

Another method is casting light rays into the scene and assigning a color based on the material that each ray lands on. This is ray tracing, and it has a few advantages. First, it is much easier to handle reflections, transparency, shadows, and other effects where information is required beyond what the affected geometry and its material provide. There are usually ways around this without resorting to ray tracing, but they each have their own trade-offs. Second, it can be more efficient for certain data sets. Rasterization, since it's based around a “where in a triangle is this point” algorithm, needs geometry to be made up of polygons.
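For contrast, here is the ray-casting step in its simplest possible form, a toy ray/sphere test rather than anything resembling the GR6500's fixed-function units, whose internals Imagination has not detailed at this level:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// Cast a ray against a sphere; returns the distance to the nearest hit, or a
// negative value on a miss. The caller would then look up the hit material
// to color the pixel. 'dir' is assumed to be normalized.
float IntersectSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius)
{
    Vec3 oc = Sub(origin, center);
    float b = 2.0f * Dot(oc, dir);
    float c = Dot(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * c;           // quadratic discriminant (a == 1)
    if (disc < 0.0f) return -1.0f;           // ray misses the sphere
    return (-b - std::sqrt(disc)) / 2.0f;    // nearest intersection distance
}
```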

It also has the appeal of being what the real world sort-of does (assuming we don't need to model Gaussian beams). That doesn't necessarily mean anything, though.

At Mobile World Congress, Imagination Technologies once again showed off their ray tracing hardware, embodied in the PowerVR GR6500 GPU. This graphics processor has dedicated circuitry to calculate rays, and they use it in a couple of different ways. They presented several demos that modified Unity 5 to take advantage of their ray tracing hardware. One particularly interesting one was a quick, seven-second video that added ray traced reflections atop an otherwise rasterized scene. It was a little too smooth, creating reflections that were too glossy, but that could probably be downplayed in the material (Update, Feb 24th @ 5pm: Car paint is actually that glossy; it's a different issue). Back when I was working on a GPU-accelerated software renderer, before Mantle, Vulkan, and DirectX 12, I was hoping to use OpenCL-based ray traced highlights on otherwise idle GPUs, if I didn't have any other purpose for them. Now, though, those GPUs can be exposed to graphics APIs directly, so they might not be so idle.

The downside of dedicated ray tracing hardware is that, well, the die area could have been used for something else. Extra shaders, for compute, vertex, and material effects, might be more useful in the real world... or maybe not. Add in the fact that fixed-function circuitry already exists for rasterization, and you're left balancing gain against cost.

It could be cool, but it has its trade-offs, like anything else.

Valve Releases SteamVR Performance Test - Is Your Rig Ready?

Subject: Graphics Cards | February 22, 2016 - 06:03 PM |
Tagged: vive, valve, steamvr, steam, rift, performance test, Oculus, htc

Though I am away from my stacks of hardware at the office attending Mobile World Congress in Barcelona, Valve dropped a bomb on us today in the form of a new hardware performance test that gamers can use to determine if they are ready for the SteamVR revolution. The aptly named "SteamVR Performance Test" is a free title available through Steam that any user can download and run to get a report card on their installed hardware. No VR headset required!

And unlike the Oculus Compatibility Checker, the application from Valve runs actual game content to measure your system. Oculus' app only looks at the hardware in your system for certification, without measuring its performance in any way. (Overclockers and users with Ivy Bridge Core i7 processors have been reporting failed results on the Oculus test for some time.)

The SteamVR Performance Test runs a set of scenes from the Aperture Science Robot Repair demo, an experience developed directly for the HTC Vive and one that I was able to run through during CES last month. Valve is using a very interesting new feature called "dynamic fidelity" that adjusts the game's image quality to avoid dropped frames and frame rates under 90 FPS, maintaining a smooth and comfortable experience for the VR user. Though it is the first time I have seen it used, it sounds similar to what John Carmack did with the id Tech 5 engine, attempting to balance performance across hardware while maintaining a targeted frame rate.
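Valve hasn't published how dynamic fidelity works internally, but a plausible minimal sketch is a feedback loop on GPU frame time against the 90 Hz budget of roughly 11.1 ms, adjusting something like render resolution. All of the thresholds and bounds below are invented for illustration:

```cpp
// Feedback loop: scale render resolution so GPU frame time stays under the
// 90 Hz budget. All thresholds and bounds here are invented for illustration.
constexpr float kBudgetMs   = 1000.0f / 90.0f;  // ~11.1 ms per frame at 90 Hz
constexpr float kHeadroomMs = 1.0f;             // safety margin for spikes

float UpdateRenderScale(float scale, float gpuFrameTimeMs)
{
    if (gpuFrameTimeMs > kBudgetMs - kHeadroomMs)
        scale *= 0.95f;      // drop fidelity quickly to avoid a missed frame
    else if (gpuFrameTimeMs < 0.8f * kBudgetMs)
        scale *= 1.01f;      // claw fidelity back slowly when there's headroom
    if (scale < 0.5f) scale = 0.5f;             // clamp to sane bounds
    if (scale > 1.5f) scale = 1.5f;
    return scale;
}
```

The asymmetry (cut fast, recover slowly) matters in VR, where a single missed refresh is far more noticeable than slightly softer image quality.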

The technology could be a perfect match for VR content where frame rates above or at the 90 FPS target are more important than visual fidelity (in nearly all cases). I am curious to see how Valve may or may not pursue and push this technology in its own games and for the Vive / Rift in general. I have some questions pending with them, so we'll see what they come back with.

A result for a Radeon R9 Fury provided by AMD

Valve's test offers a very simple three tiered breakdown for your system: Not Ready, Capable and Ready. For a more detailed explanation you can expand on the data to see metrics like the number of frames you are CPU bound on, frames below the very important 90 FPS mark and how many frames were tested in the run. The Average Fidelity metric is the number that we are reporting below and essentially tells us "how much quality" the test estimates you can run at while maintaining that 90 FPS mark. What else that fidelity result means is still unknown - but again we are trying to find out. The short answer is that the higher that number goes, the better off you are, and the more demanding game content you'll be able to run at acceptable performance levels. At least, according to Valve.

Because I am not at the office to run my own tests, I decided to write up this story using results from a third party. That third party is AMD - let the complaining begin. Obviously this does NOT count as independent testing but, in truth, it would be hard to cheat on these results unless you went WAY out of your way to change control panel settings, etc. The demo is self-running, and AMD detailed the hardware and drivers used in the results.

  • Intel i7-6700K
  • 2x4GB DDR4-2666 RAM
  • Z170 motherboard
  • Radeon Software 16.1.1
  • NVIDIA driver 361.91
  • Win10 64-bit

GPU                     Score
2x Radeon R9 Nano       11.0
GeForce GTX 980 Ti      11.0
Radeon R9 Fury X         9.6
Radeon R9 Fury           9.2
GeForce GTX 980          8.1
Radeon R9 Nano           8.0
Radeon R9 390X           7.8
Radeon R9 390            7.0
GeForce GTX 970          6.5

These results were provided by AMD in an email to the media. Take that for what you will until we can run our own tests.

First, the GeForce GTX 980 Ti is the highest performing single GPU tested, with a score of 11 - because of course it goes to 11. The same score is reported for the multi-GPU configuration with two Radeon R9 Nanos, so clearly we are seeing a ceiling in this version of the SteamVR Performance Test. With a single R9 Nano scoring 8.0, that is only a 38% scaling rate, but I think we are limited by the test in this case. Either way, it's great news to see that AMD has affinity multi-GPU up and running, utilizing one GPU for each eye's rendering. (AMD pointed out that users who want to test the multi-GPU implementation will need to add the -multigpu launch option.) I still need to confirm if GeForce cards scale accordingly. UPDATE: Ken at the office ran a quick check with a pair of GeForce GTX 970 cards with the same -multigpu option and saw no scaling improvements. It appears NVIDIA has work to do here.
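Neither Valve nor AMD has detailed how the per-eye split is implemented in this test (AMD's LiquidVR SDK exposes an affinity multi-GPU API for DX11), but the general shape is easy to sketch with D3D12's linked-node masks: one command queue per physical GPU, one eye per queue. Purely illustrative:

```cpp
#include <d3d12.h>

// One direct command queue per physical GPU in a linked-node (SLI/CrossFire
// style) D3D12 device: node 0 renders the left eye, node 1 the right.
HRESULT CreatePerEyeQueues(ID3D12Device* device,
                           ID3D12CommandQueue** leftEye,
                           ID3D12CommandQueue** rightEye)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;

    desc.NodeMask = 0x1;  // first GPU
    HRESULT hr = device->CreateCommandQueue(&desc, IID_PPV_ARGS(leftEye));
    if (FAILED(hr)) return hr;

    desc.NodeMask = 0x2;  // second GPU
    return device->CreateCommandQueue(&desc, IID_PPV_ARGS(rightEye));
}
```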

Moving down the stack, it's clear why AMD was so excited to send out these early results. The R9 Fury X and R9 Fury both come out ahead of the GeForce GTX 980, while the R9 Nano, R9 390X and R9 390 all post better scores than NVIDIA's GeForce GTX 970. This comes as no surprise - AMD's Radeon parts tend to offer better performance per dollar in benchmarks and many games.

There is obviously a lot more to consider than the results this SteamVR Performance Test provides when picking hardware for a VR system, but we are glad to see Valve out in front of the many, many questions that are flooding forums across the web. Is your system ready??

Source: Valve