The great GTX 950 review roundup

Subject: Graphics Cards | August 24, 2015 - 03:43 PM |
Tagged: nvidia, moba, maxwell, gtx 950, GM206, geforce, DOTA 2

It is more fun to test at the high end, and the number of MOBA gamers here at PCPer could be described as sparse, to say the least.  Perhaps you are a MOBA gamer looking to play on a 1080p screen, with less than $200 to invest in a GPU, who feels that Ryan somehow missed a benchmark that is important to you.  One of the dozens of reviews linked below is likely to have covered the game or specific feature you are looking for.  They also represent the gamut of cards available at launch from a wide variety of vendors, both stock and overclocked models.  If you just want a quick refresher on the specifications and what has happened to the pricing of already released models, The Tech Report has handy tables for you to reference here.


"For most of this summer, much of the excitement in the GPU market has been focused on pricey, high-end products like the Radeon Fury and the GeForce GTX 980 Ti. Today, Nvidia is turning the spotlight back on more affordable graphics cards with the introduction of the GeForce GTX 950, a $159.99 offering that promises to handle the latest games reasonably well at the everyman's resolution of 1080p."

Here are some more Graphics Card articles from around the web:


GPU Market Share: NVIDIA Gains in Shrinking Add-in Board Market

Subject: Graphics Cards | August 21, 2015 - 11:30 AM |
Tagged: PC, nvidia, Matrox, jpr, graphics cards, gpu market share, desktop market share, amd, AIB, add in board

While we reported recently on the decline of overall GPU shipments, a new report out of Jon Peddie Research covers the add-in board segment to give us a look at the desktop graphics card market. So how are the big two (sorry, Matrox) doing?

GPU Supplier | Market Share This Quarter | Market Share Last Quarter | Market Share Last Year
AMD          | 18.0%                     | 22.5%                     | 37.9%
Matrox       | 0.00%                     | 0.1%                      | 0.1%
NVIDIA       | 81.9%                     | 77.4%                     | 62.0%

The big news is of course the drop in market share for AMD: down 4.5 points quarter-to-quarter, and down to just 18% from 37.9% a year ago. There will be many opinions as to why their share has been dropping over the last year, but it certainly didn't help that the 300-series GPUs are rebrands of the 200-series, and the new Fury cards have had very limited availability so far.


The graph from Mercury Research illustrates what is almost a mirror image, with NVIDIA gaining roughly 20 points of share over the year as AMD lost roughly 20, a 40-point swing between the two. Ouch. Meanwhile (not pictured) Matrox didn't have a statistically meaningful quarter, but somehow still managed to appear on the JPR report with 0.1% market share last quarter.

The desktop market, and specifically the enthusiast segment, isn't actually suffering quite as much as the overall PC market.

"The AIB market has benefited from the enthusiast segment PC growth, which has been partially fueled by recent introductions of exciting new powerful (GPUs). The demand for high-end PCs and associated hardware from the enthusiast and overclocking segments has bucked the downward trend and given AIB vendors a needed prospect to offset declining sales in the mainstream consumer space."

But not all is well, considering the overall add-in board attach rate for desktops "has declined from a high of 63% in Q1 2008 to 37% this quarter". This is indicative of the industry-wide trend toward integrated GPUs, with AMD APUs and Intel processor graphics, as illustrated by this graphic from the report.


The year-to-year numbers show an overall drop of 18.8%, and even with its dominant 81.9% market share, NVIDIA still saw shipments decrease by 12% this quarter. These trends seem to indicate a gloomy future for discrete graphics in the coming years, but for now we in the enthusiast community will continue to keep it afloat. It would certainly be nice to see some gains from AMD soon to keep things interesting, which might help bring prices down from the lofty $400 - $600 that flagship cards command at the moment.

Manufacturer: NVIDIA

Another Maxwell Iteration

The mainstream end of the graphics card market is about to get a bit more complicated with today’s introduction of the GeForce GTX 950. Based on a slightly cut-down GM206 chip, the same chip used in the GeForce GTX 960 released almost 8 months ago, the new GTX 950 will fill a gap in NVIDIA's product stack, resting right at $160-170 MSRP. Until today, the next spot down from the GTX 960 was filled by the GeForce GTX 750 Ti, the very first iteration of Maxwell (we usually call it Maxwell 1), which came out in February of 2014!

Even though that is a long time to go without refreshing the GTX x50 part of the lineup, NVIDIA was likely hesitant to do so because of the overwhelming success of the GM107 for mainstream gaming. It was low cost, incredibly efficient and didn’t require any external power to run. That led us down the path of upgrading OEM PCs with the GTX 750 Ti, an article and video that still gets hundreds of views and dozens of comments a week.


The GTX 950 has some pretty big shoes to fill. I can tell you right now that it uses more power than the GTX 750 Ti, and it requires a 6-pin power connector, but it does so while increasing gaming performance dramatically. The primary competition from AMD is the Radeon R7 370, a Pitcairn GPU that is long in the tooth and missing many of the features that Maxwell provides.

And NVIDIA is taking a secondary angle with the GTX 950 launch: targeting MOBA players (DOTA 2 in particular) directly and aggressively. With the success of this style of game over the last several years, and the impressive $18M+ purse for the largest DOTA 2 tournament just behind us, there isn’t a better area of PC gaming to be going after today. But are the tweaks and changes to the card and software really going to make a difference for MOBA gamers, or is it just marketing fluff?

Let’s dive into everything GeForce GTX 950!

Continue reading our review of the NVIDIA GeForce GTX 950 2GB Graphics Card!!

Gameworks VR, NVIDIA's Direct Driver Mode for the Oculus Rift

Subject: General Tech, Displays | August 13, 2015 - 06:51 PM |
Tagged: nvidia, oculus rift, gameworks vr

The news that Oculus SDK 0.7 would incorporate Direct Driver Mode after the August 20th update is not very old, and now NVIDIA has announced the availability of the beta version of its GameWorks VR.  As mentioned on this podcast, until now your GPU has treated the Oculus as a secondary monitor, but with this update your graphics driver will talk to the Oculus directly as a separate device, which should help greatly with latency and with the development of the tricks and treats yet to be discovered when programming for this type of interface.


NVIDIA's GameWorks VR, as well as AMD's LiquidVR, will provide a platform for developers to program for the Oculus Rift as well as the competing products from other companies.  The new beta SDK from NVIDIA has been updated to support VR SLI and is compatible with the new 350.60 Game Ready drivers.  Programmers working with the Maxwell architecture will also benefit from Multi-Res Shading, which should increase the performance of their current programs.  Follow the links if you are interested in developing for the Oculus; otherwise, wait patiently for the day you can pre-order one.

Source: NVIDIA

Podcast #362 - Benchmarking a Voodoo 3, Flash Media Summit 2015, Skylake Delidding and more!

Subject: General Tech | August 13, 2015 - 01:14 PM |
Tagged: podcast, video, amd, nvidia, GTX 970, Zotac GTX 970 AMP! Extreme Core Edition, dx12, 3dfx, voodoo 3, Intel, SSD 750, NVMe, Samsung, R9 Fury, Fiji, gtx 950

PC Perspective Podcast #362 - 08/13/2015

Join us this week as we discuss Benchmarking a Voodoo 3, Flash Media Summit 2015, Skylake Delidding and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Sebastian Peak

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Is this the GTX 950?

Subject: Graphics Cards | August 11, 2015 - 02:54 PM |
Tagged: rumour, nvidia, gtx 950

Rumours of the impending release of a GTX 950, and perhaps even a GTX 950 Ti, continue to spread, most recently at Videocardz, who have developed a reputation for this kind of report.  Little is known at this time and the specifications remain unconfirmed, but they have found a page showing an ASUS STRIX GTX 950 with 2GB of memory and a DirectCU II cooler. The prices shown are unlikely to represent the actual retail price, even in Finland, where the capture is from.


Also spotted is a PNY GTX 950 retail box, which shows us little in the way of details; the power plug is facing away from the camera, so we are still unsure how many power connectors will be needed.  Videocardz also reiterates their belief from the first leak that the card will use a GM206 Maxwell graphics processor cut down to 75%, with 768 CUDA cores and a 128-bit memory interface.

Source: Videocardz

Overclock any NVIDIA GPU on Desktop and Mobile with a New Utility

Subject: Graphics Cards | August 10, 2015 - 06:14 PM |
Tagged: overclocking, overclock, open source, nvidia, MSI Afterburner, API

An author called "2PKAQWTUQM2Q7DJG" (likely not a real name) has published a fascinating little article today on his/her Wordpress blog, entitled "Overclocking Tools for NVIDIA GPUs Suck. I Made My Own". It contains a full account of the process of creating an overclocking tool beyond the constraints of common utilities such as MSI Afterburner.

By probing MSI's OC utility with OllyDbg (an x86 "assembler level analysing debugger"), the author was able to track down how Afterburner works.


“nvapi.dll” definitely gets loaded here using LoadLibrary/GetModuleHandle. We’re on the right track. Now where exactly is that lib used? ... That’s simple, with the program running and the realtime graph disabled (it polls NvAPI constantly adding noise to the mass of API calls). we place a memory breakpoint on the .Text memory segment of the NVapi.dll inside MSI Afterburner’s process... Then we set the sliders in the MSI tool to get some negligible GPU underclock and hit the “apply” button. It breaks inside NvAPI… magic!

After further explaining the process and sharing his/her source code for an overclocking utility, the author goes on to show the finished product in the form of a command line utility.
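The core trick the article arrives at is calling into NVIDIA's nvapi.dll directly instead of going through a GUI tool. As a rough, minimal sketch of the kind of plumbing involved (this is not the author's code), the snippet below loads nvapi.dll and resolves its single exported entry point, nvapi_QueryInterface, which hands back the rest of the largely undocumented API by numeric ID; the ID used here is a placeholder, and the real IDs for the clock-control calls are exactly what the article digs up.

    #include <windows.h>
    #include <cstdio>

    // nvapi.dll exposes a single export, nvapi_QueryInterface, which returns
    // pointers to the rest of the (mostly undocumented) API by 32-bit ID.
    typedef void* (__cdecl *NvAPI_QueryInterface_t)(unsigned int id);

    int main()
    {
    #ifdef _WIN64
        HMODULE nvapi = LoadLibraryA("nvapi64.dll");  // 64-bit driver DLL
    #else
        HMODULE nvapi = LoadLibraryA("nvapi.dll");    // 32-bit driver DLL
    #endif
        if (!nvapi) {
            std::printf("NvAPI DLL not found - is an NVIDIA driver installed?\n");
            return 1;
        }

        auto queryInterface = reinterpret_cast<NvAPI_QueryInterface_t>(
            GetProcAddress(nvapi, "nvapi_QueryInterface"));
        if (!queryInterface) {
            std::printf("nvapi_QueryInterface export not found\n");
            return 1;
        }

        // Every NvAPI function is fetched by a magic ID. This one is a
        // PLACEHOLDER for illustration only; the real overclocking IDs have
        // to come from headers or from reverse engineering, as in the article.
        const unsigned int kSomeFunctionId = 0xDEADBEEF; // hypothetical
        void* fn = queryInterface(kSomeFunctionId);
        std::printf("resolved function pointer: %p\n", fn);

        FreeLibrary(nvapi);
        return 0;
    }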


There is a link to the finished version of this utility at the end of the article, as well as the entire process with all source code. It makes for an interesting read (even for the painfully inept at programming, such as myself), and the provided link to download this mysterious overclocking utility (disguised as a JPG image file, no less) makes it both tempting and a little dubious. Does this really allow overclocking of any NVIDIA GPU, including mobile parts? What could be the harm in trying? In all seriousness, however, since some of what was seemingly uncovered in the article is no doubt proprietary, how long will this information remain available?

It would probably be wise to follow the link to the Wordpress page ASAP!

Source: Wordpress
Manufacturer: PC Perspective

It's Basically a Function Call for GPUs

Mantle, Vulkan, and DirectX 12 all claim to reduce overhead and provide a staggering increase in “draw calls”. As mentioned in the previous editorial, the way developers load the graphics card with tasks changes drastically in these new APIs. With DirectX 10 and earlier, applications would assign attributes to (what they are told is) the global state of the graphics card. After everything is configured and bound, one of a few “draw” functions is called, which queues the task in the graphics driver as a “draw call”.
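As a quick, generic illustration of that older model (a sketch, not code from the editorial), the DirectX 11 immediate context keeps the same bind-the-global-state-then-draw flow: the application points the device's one set of state slots at its resources, and each Draw call against that state becomes a draw call queued in the driver.

    #include <d3d11.h>

    // Classic bind-then-draw flow on the single, global pipeline state.
    // The context, shaders and vertex buffer are assumed to already exist.
    void DrawOneObject(ID3D11DeviceContext* ctx,
                       ID3D11Buffer* vertexBuffer,
                       ID3D11VertexShader* vs,
                       ID3D11PixelShader* ps,
                       UINT stride, UINT vertexCount)
    {
        UINT offset = 0;
        ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
        ctx->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
        ctx->VSSetShader(vs, nullptr, 0);
        ctx->PSSetShader(ps, nullptr, 0);

        // As far as the API is concerned, the state set above is the state
        // of the graphics card; this call queues a draw call against it.
        ctx->Draw(vertexCount, 0);
    }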

While this suggests that just a single graphics device is to be defined, which we also mentioned in the previous article, it also implies that one thread needs to be the authority. This limitation has been known about for a while, and it contributed to the meme that consoles can squeeze out all the performance they have, but PCs are “too high level” for that. Microsoft tried to combat this with “Deferred Contexts” in DirectX 11. This feature allows virtual, shadow states to be built up from secondary threads and then appended, whole, to the global state. It was a compromise between letting each thread create its own commands and the legacy decision to have a single, global state for the GPU.

Some developers experienced gains, while others lost a bit. It didn't live up to expectations.
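To make that compromise concrete, here is a rough sketch (again, not from the editorial) of the DirectX 11 deferred context pattern: a worker thread records its state changes and draws into a deferred context, producing a command list that the main thread later appends, whole, to the immediate context.

    #include <d3d11.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Worker thread: record commands into a shadow, deferred context.
    // The device and any resources are assumed to have been created elsewhere.
    ComPtr<ID3D11CommandList> RecordWork(ID3D11Device* device)
    {
        ComPtr<ID3D11DeviceContext> deferred;
        device->CreateDeferredContext(0, &deferred);

        // Set whatever state this thread needs, then issue its draws, e.g.:
        // deferred->IASetVertexBuffers(...);
        // deferred->VSSetShader(...);
        // deferred->Draw(...);

        // Close the recording into a command list for the main thread.
        ComPtr<ID3D11CommandList> commandList;
        deferred->FinishCommandList(FALSE, &commandList);
        return commandList;
    }

    // Main thread: replay the recorded commands on the global, immediate context.
    void Submit(ID3D11DeviceContext* immediate, ID3D11CommandList* commandList)
    {
        immediate->ExecuteCommandList(commandList, FALSE);
    }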


The paradigm used to load graphics cards is the problem. It doesn't make sense anymore. A developer might not want to draw a primitive with every poke of the GPU. At times, they might want to shove a workload of simple linear algebra through it, while other requests could simply be pushing memory around to set up a later task (or to read the result of a previous one). More importantly, any thread could want to do this to any graphics device.


The new graphics APIs allow developers to submit their tasks more quickly and intelligently, and they allow the drivers to schedule compatible tasks better, even simultaneously. In fact, the driver's job has been massively simplified altogether. When we tested 3DMark back in March, two interesting things were revealed:

  • Both AMD and NVIDIA are only a two-digit percentage of draw call performance apart
  • Both AMD and NVIDIA saw an order of magnitude increase in draw calls
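As a rough sketch of what that new submission model looks like in practice (an illustration, not the code behind those tests), DirectX 12 lets every worker thread record into its own command list, with one ExecuteCommandLists call handing the whole batch to the driver; there is no single global state and no single authoritative thread.

    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>
    using Microsoft::WRL::ComPtr;

    // Each worker records into its own allocator/command list pair.
    // Device, queue and pipeline state creation are assumed to happen elsewhere.
    void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned threadCount)
    {
        std::vector<ComPtr<ID3D12CommandAllocator>> allocators(threadCount);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
        std::vector<std::thread> workers;

        for (unsigned i = 0; i < threadCount; ++i) {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocators[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocators[i].Get(), nullptr,
                                      IID_PPV_ARGS(&lists[i]));

            workers.emplace_back([&lists, i] {
                // Record this thread's share of the frame: draws, compute
                // dispatches or copies - anything, not just "draw a primitive".
                // lists[i]->SetPipelineState(...);
                // lists[i]->DrawInstanced(...);
                lists[i]->Close();
            });
        }
        for (auto& w : workers) w.join();

        // A single submission hands every recorded list to the driver at once.
        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists) raw.push_back(l.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }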

Read on to see what this means for games and game development.

Upcoming Oculus SDK 0.7 Integrates Direct Driver Mode from AMD and NVIDIA

Subject: Graphics Cards | August 7, 2015 - 10:46 AM |
Tagged: sdk, Oculus, nvidia, direct driver mode, amd

In an email sent out by Oculus this morning, the company has revealed some interesting details about the upcoming release of the Oculus SDK 0.7 on August 20th. The most interesting change is the introduction of Direct Driver Mode, developed in tandem with both AMD and NVIDIA.


This new version of the SDK will remove the simplistic "Extended Mode" that many users and developers relied on as a quick and dirty way of getting the Rift development kits up and running. However, that implementation had the downside of additional latency, something that Oculus is trying to eliminate completely.

Here is what Oculus wrote about the "Direct Driver Mode" in its email to developers:

Direct Driver Mode is the most robust and reliable solution for interfacing with the Rift to date. Rather than inserting VR functionality between the OS and the graphics driver, headset awareness is added directly to the driver. As a result, Direct Driver Mode avoids many of the latency challenges of Extended Mode and also significantly reduces the number of conflicts between the Oculus SDK and third party applications. Note that Direct Driver Mode requires new drivers from NVIDIA and AMD, particularly for Kepler (GTX 645 or better) and GCN (HD 7730 or better) architectures, respectively.

We have heard NVIDIA and AMD talk about the benefits of direct driver implementations for VR headsets for a long time. NVIDIA calls its software implementation GameWorks VR and AMD calls its software support LiquidVR. Both aim to do the same thing: give the developer more direct access to the headset hardware while offering new ways for faster, lower latency rendering in games.


Both companies have unique features to offer as well, including NVIDIA and its multi-res shading technology. Check out our interview with NVIDIA on the topic below:

NVIDIA's Tom Petersen came to our offices to talk about GameWorks VR

Other notes in the email include a tentative November release for the 1.0 version of the Oculus SDK. But until that version releases, Oculus is only guaranteeing that each new runtime will support the previous version of the SDK. So, when SDK 0.8 is released, you can only count on support for it and 0.7. When 0.9 comes out, game developers will need to make sure they are at least on SDK 0.8 or they risk incompatibility. Things will be tough for developers in this short window of time, but Oculus claims it's necessary to "allow them to more rapidly evolve the software architecture and API." After SDK 1.0 hits, future SDK releases will continue to support 1.0.

Source: Oculus

Podcast #361 - Intel Skylake Core i7-6700K, Logitech G29 Racing Wheel, Lenovo LaVie-Z and more!

Subject: General Tech | August 6, 2015 - 03:04 PM |
Tagged: Z170-A, z170 deluxe, Z170, video, Skylake, podcast, nvidia, maxwell, logitech g29, Lenovo, lavie-z, Intel, gigabyte, asus, 950ti, 6700k

PC Perspective Podcast #361 - 08/06/2015

Join us this week as we discuss the Intel Skylake Core i7-6700K, Logitech G29 Racing Wheel, Lenovo LaVie-Z and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!