Phoronix Looks at NVIDIA's Linux Driver Quality Settings

Subject: Graphics Cards | September 22, 2015 - 09:09 PM |
Tagged: nvidia, linux, graphics drivers

In the NVIDIA driver control panel, there is a slider that controls Performance vs. Quality. On Windows, I leave it set to “Let the 3D application decide” and change my 3D settings individually, as needed. I haven't used NVIDIA's control panel on Linux much, mostly because I usually install Linux on my laptop, which runs an AMD GPU, but the Linux UI seems to put a little more weight on the setting.


Or is that GTux?

Phoronix decided to test how each of these settings affects a few titles, and the only benchmark they bothered reporting is Team Fortress 2, because the other titles saw basically zero variance. TF2 did see a difference of 6 FPS, though, from 115 FPS at High Quality to 121 FPS at Quality. Oddly enough, Performance and High Performance actually performed worse than Quality.

To me, this sounds like NVIDIA has basically forgotten about the feature. It barely affects any title, the one game where it changes anything measurable dates from 2007, and it contradicts what the company is doing on other platforms. I predict that Quality is the default, which would match Windows (albeit with only three choices there: “Performance”, “Balanced”, and the default “Quality”). If it is, you should probably just leave it there 24/7 in case NVIDIA has simply never tuned the other settings. On Windows, it is kind-of redundant with GeForce Experience anyway.

Final note: Phoronix has only tested the GTX 980. Results may vary elsewhere, but probably don't.

Source: Phoronix
Manufacturer: NVIDIA

Pack a full GTX 980 on the go!

For many years, a truly mobile gaming system has been attainable if you were willing to pay the premium for high-performance components. But anyone who has done research in this field will tell you that, similar names aside, mobile GPUs from both AMD and NVIDIA have tended to be noticeably slower than their desktop counterparts. A GeForce GTX 970M, for example, had a CUDA core count only slightly higher than the desktop GTX 960, and 30% lower than the true desktop GTX 970. So even though mobile performance was fantastic, desktop users continued to hold a dominant position over mobile gamers in PC gaming.

This fall, NVIDIA is changing that with the introduction of the GeForce GTX 980 for gaming notebooks. Notice I did not put an 'M' at the end of that name; it's not an accident. NVIDIA has found a way, through binning and component design, to cram the entirety of a GM204-based Maxwell GTX 980 GPU inside portable gaming notebooks.


The results are impressive and the implications for PC gamers are dramatic. Systems built with the GTX 980 will include the same 2048 CUDA cores and 4GB of GDDR5 running at 7.0 GHz, and will run at the same base and typical GPU Boost clocks as the reference GTX 980 cards you can buy today for $499+. And, while you won't find this GPU in anything called a "thin and light", 17-19" gaming laptops do allow for portability of gaming unlike any SFF PC.

So how did they do it? NVIDIA has found a way to get a desktop GPU with a 165 watt TDP into a form factor that has a physical limit of 150 watts (for the MXM module implementations at least) through binning, component selection and improved cooling. Not only that, but there is enough headroom to allow for some desktop-class overclocking of the GTX 980 as well.

Continue reading our preview of the new GTX 980 for notebooks!!

What to use for 1080p on Linux or your future SteamOS machine

Subject: Graphics Cards | September 17, 2015 - 03:34 PM |
Tagged: linux, amd, nvidia

If you are using a 1080p monitor, or perhaps even outputting to a large 1080p TV, there is no point in picking up a $500+ GPU, as you will not be using the majority of its capabilities. Phoronix has just investigated which GPU offers the best value for gaming at that resolution, putting five AMD GPUs from the Radeon R9 270X to the R9 Fury and six NVIDIA cards ranging from the GTX 950 to a GTX TITAN X on their test bench. The TITAN X is a bit of overkill, unless somehow your display is capable of 200+ FPS. When you look at frames per second per dollar, the GTX 950 came out on top, providing playable frame rates at a very low cost. These results may change as AMD's Linux driver improves, but for now NVIDIA is the way to go for those who game on Linux.
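Phoronix's value metric is simple division; here is a minimal sketch of the frames-per-second-per-dollar calculation, using made-up FPS and price figures rather than Phoronix's actual data:

```python
# Hypothetical FPS and USD street prices for illustration only --
# not Phoronix's measured numbers.
cards = {
    "GTX 950":     {"fps": 60,  "price": 160},
    "GTX TITAN X": {"fps": 200, "price": 999},
}

def fps_per_dollar(card):
    """Value metric: average frame rate divided by street price."""
    return card["fps"] / card["price"]

best = max(cards, key=lambda name: fps_per_dollar(cards[name]))
# With these made-up numbers, the cheap card wins the value metric
# even though the flagship is far faster in absolute terms.
```

A flagship can triple the frame rate and still lose on value, which is why a modest card tends to top this kind of chart at 1080p.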


"Earlier this week I posted a graphics card comparison using the open-source drivers and looking at the best value and power efficiency. In today's article is a larger range of AMD Radeon and NVIDIA GeForce graphics cards being tested under a variety of modern Linux OpenGL games/demos while using the proprietary AMD/NVIDIA Linux graphics drivers to see how not only the raw performance compares but also the performance-per-Watt, overall power consumption, and performance-per-dollar metrics."

Here are some more Graphics Card articles from around the web:



Source: Phoronix

Podcast #367 - AMD R9 Nano, a Corsair GTX 980Ti, NVIDIA Pascal Rumors and more!

Subject: General Tech | September 17, 2015 - 12:00 PM |
Tagged: xps 12, video, TSMC, Steam Controller, r9 nano, podcast, pascal, nvidia, msi, hdplex h5, gtx 980ti sea hawk, fury x, Fiji, dell, corsair, amd

PC Perspective Podcast #367 - 09/17/2015

Join us this week as we discuss the AMD R9 Nano, a Corsair GTX 980Ti, NVIDIA Pascal Rumors and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

MSI and Corsair Launch Liquid Cooled GTX 980 Ti SEA HAWK

Subject: Graphics Cards | September 17, 2015 - 09:14 AM |
Tagged: nvidia, msi, liquid cooled, GTX980Ti SEA HAWK, GTX 980 Ti, graphics card, corsair

We reported last night on Corsair's new Hydro GFX, a liquid-cooled GTX 980 Ti powered by an MSI GPU, and MSI has their own new product based on this concept as well.


"The MSI GTX 980Ti SEA HAWK utilizes the popular Corsair H55 closed loop liquid-cooling solution. The micro-fin copper base takes care of an efficient heat transfer to the high-speed circulation pump. The low-profile aluminum radiator is easy to install and equipped with a super silent 120 mm fan with variable speeds based on the GPU temperature. However, to get the best performance, the memory and VRM need top-notch cooling as well. Therefore, the GTX 980Ti SEA HAWK is armed with a ball-bearing radial fan and a custom shroud design to ensure the best cooling performance for all components."

The MSI GTX 980 Ti Sea Hawk appears identical to the Corsair Hydro GFX, and a look through the specs confirms the similarities:

  • NVIDIA GeForce GTX 980 Ti GPU
  • 2816 Processor Units
  • 1291 MHz/1190 MHz Boost/Base Core Clock
  • 6 GB 384-bit GDDR5 Memory
  • 7096 MHz Memory Clock
  • Dimensions: Card - 270x111x40 mm; Cooler - 151x118x52 mm
  • Weight: 1286 g
With a 1190 MHz base and 1291 MHz boost clock, the SEA HAWK has the same factory overclock as the Corsair-branded unit, and MSI is also advertising the card's potential to go further:

"Even though the GTX 980Ti SEA HAWK boasts some serious clock speeds out-of-the-box, the MSI Afterburner overclocking utility allows users to go even further. Explore the limits with Triple Overvoltage, custom profiles and real-time hardware monitoring."

I imagine the availability of this MSI-branded product will be greater than that of the Corsair-branded equivalent, but in either case you get a GTX 980 Ti with the potential to run as fast and cool as a custom-cooled solution, without any of the extra work. Pricing wasn't immediately available this morning, but expect something close to the $739 MSRP we saw from Corsair.

Source: MSI

Corsair and MSI Introduce Hydro GFX Liquid Cooled GeForce GTX 980 Ti

Subject: Graphics Cards | September 16, 2015 - 09:00 PM |
Tagged: nvidia, msi, liquid cooler, GTX 980 Ti, geforce, corsair, AIO

A GPU with an attached closed-loop liquid cooler is a little more mainstream these days, with AMD's Fury X a high-profile example, and now a partnership between Corsair and MSI is bringing a very powerful NVIDIA option to the market.


The new product is called the Hydro GFX, with NVIDIA's GeForce GTX 980 Ti supplying the GPU horsepower. Of course, the advantages of a closed-loop cooler are higher (sustained) clocks and lower temps/noise, which in turn means much better performance. Corsair explains:

"Hydro GFX consists of a MSI GeForce GTX 980 Ti card with an integrated aluminum bracket cooled by a Corsair Hydro Series H55 liquid cooler.

Liquid cooling keeps the card’s hottest, most critical components - the GPU, memory, and power circuitry - 30% cooler than standard cards while running at higher clock speeds with no throttling, boosting the GPU clock 20% and graphics performance up to 15%.

The Hydro Series H55 micro-fin copper cooling block and 120mm radiator expels the heat from the PC reducing overall system temperature and noise. The result is faster, smoother frame rates at resolutions of 4K and beyond at whisper quiet levels."

The factory overclock on this 980 Ti is pretty substantial out of the box, with a 1190 MHz base (stock 1000 MHz) and 1291 MHz boost clock (stock 1075 MHz). Memory is not overclocked (running at the default 7096 MHz), so there should still be some headroom for overclocking thanks to the air cooling for the RAM/VRM.
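Corsair's "20%" claim lines up with those clocks; a quick sanity check of the arithmetic, using the stock and overclocked figures quoted above:

```python
# Clock speeds in MHz, as quoted above (stock reference vs. Hydro GFX).
stock_base, stock_boost = 1000, 1075
oc_base, oc_boost = 1190, 1291

base_gain = (oc_base - stock_base) / stock_base      # 190/1000 -> 19%
boost_gain = (oc_boost - stock_boost) / stock_boost  # 216/1075 -> ~20%
```

Both gains round to roughly one-fifth, so the marketing figure is honest even if the real-world performance uplift is smaller (up to 15%, per Corsair).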


A look at the box - and the Corsair branding

Specs from Corsair:

  • NVIDIA GeForce GTX 980 Ti GPU with Maxwell 2.0 microarchitecture
  • 1190/1291 MHz base/boost clock
  • Clocked 20% faster than standard GeForce GTX 980 Ti cards for up to a 15% performance boost
  • Integrated liquid cooling technology keeps GPU, video RAM, and voltage regulator 30% cooler than standard cards
  • Corsair Hydro Series H55 liquid cooler with micro-fin copper block, 120mm radiator/fan
  • Memory: 6GB GDDR5, 7096 MHz, 384-bit interface
  • Outputs: 3x DisplayPort 1.2, HDMI 2.0, and Dual Link DVI
  • Power: 250 watts (600 watt PSU required)
  • Requirements: PCI Express 3.0 16x dual-width slot, 8+6-pin power connector, 600 watt PSU
  • Dimensions: 10.5 x 4.376 inches
  • Warranty: 3 years
  • MSRP: $739.99

As far as pricing and availability go, Corsair says the new card will debut in October in the U.S. with an MSRP of $739.99.

Source: Corsair

Report: TSMC To Produce NVIDIA Pascal On 16 nm FinFET

Subject: Graphics Cards | September 16, 2015 - 09:16 AM |
Tagged: TSMC, Samsung, pascal, nvidia, hbm, graphics card, gpu

According to a report by BusinessKorea, TSMC has been selected to produce the upcoming Pascal GPU after initially competing with Samsung for the contract.


Though some had considered the possibility of Samsung and TSMC sharing production (albeit on two different process nodes, as Samsung is on 14 nm FinFET), if this report is accurate the duties fall on TSMC's 16 nm FinFET alone. The move is not too surprising, considering the longstanding position TSMC has maintained as a fab for GPU makers and Samsung's lack of experience in this area.

The report didn't make the release date for Pascal any clearer, naming only "next year" for the new HBM-powered GPU, which will also reportedly feature 16 GB of HBM2 memory in its flagship version. This would potentially be the first GPU released at 16 nm (unless AMD has something in the works before Pascal's release), as all current AMD and NVIDIA GPUs are manufactured at 28 nm.

Virtual Reality as an Art Form? Artists Compete Using VR at PAX Prime

Subject: General Tech, Shows and Expos | September 15, 2015 - 01:07 PM |
Tagged: VR, virtual reality, Tilt Brush, PAX Prime 2015, paint, nvidia, art

A group of six artists from the gaming industry were brought together at this month's PAX Prime event in Seattle in a joint venture between NVIDIA, Valve, Google and HTC. The idea? To use virtual reality to create art. The result was very interesting, to say the least.

Wearing HTC’s VR headset, the artists had 30 minutes each to create their work using Tilt Brush. What is Tilt Brush, exactly?

"Tilt Brush uses the HTC Vive’s unique hand controllers and positional tracking to allow artists to paint in three dimensions. The software includes a remarkable digital palette, letting users draw GPU-powered real-time effects like fire, smoke and light."

The artists included Chandana Ekanayake from Uber Entertainment, Lee Petty from Double Fine Productions, Michael Shilliday from Whiterend Creative, Mike Krahulik from Penny Arcade, Sarah Northway from Northway Games and Tristan Reidford from Valve.

NVIDIA is hosting a contest on their Facebook page to pick the winner; so what's in it for you? "The artist with the most votes will win ultimate bragging rights, and voters will be entered to win a new GeForce GTX 980 Ti!" Not bad.

This is certainly a novel application of VR, but it serves to illustrate (pun intended) that the tech really does provide endless possibilities - far beyond 3D art or gameplay immersion.

Source: NVIDIA Blogs

The premium priced ASUS GTX 980 Ti STRIX DCIII OC certainly does perform

Subject: Graphics Cards | September 8, 2015 - 05:56 PM |
Tagged: STRIX DirectCU III OC, nvidia, factory overclocked, asus, 980 Ti

The ASUS GTX 980 Ti STRIX DCIII OC comes with the newest custom cooler from ASUS and a fairly respectable factory overclock of 1216 MHz base, 1317 MHz boost and a 7.2 GHz effective clock on the impressive 6GB of VRAM. Once [H]ard|OCP had a chance to use GPUTweak II, manual tweaking pushed those values to 1291 MHz base and 1392 MHz boost, along with a higher VRAM clock; for those who prefer automated OCing, there are three modes ranging from Silent to OC mode that will instantly get the card ready to use. With an MSRP of $690 and a street price usually over $700, you have to be ready to invest a lot of hard-earned cash in this card, but at 4K resolutions it does outperform the Fury X by a noticeable margin.


"Today we have the custom built ASUS GTX 980 Ti STRIX DirectCU III OC 6GB video card. It features a factory overclock, extreme cooling capabilities and state of the art voltage regulation. We compare it to the AMD Radeon R9 Fury, and overclock the ASUS GTX 980 Ti STRIX DCIII to its highest potential and look at some 4K playability."

Here are some more Graphics Card articles from around the web:


Source: [H]ard|OCP
Manufacturer: PC Perspective

To the Max?

Much of the PC enthusiast internet, including our comments section, has been abuzz with “Asynchronous Shader” discussion. Normally, I would explain what it is and then outline the issues that surround it, but I would like to swap that order this time. Basically, the Ashes of the Singularity benchmark utilizes Asynchronous Shaders in DirectX 12, but Oxide disables the feature (by Vendor ID) for NVIDIA hardware. They say that this is because, while the driver reports compatibility, “attempting to use it was an unmitigated disaster in terms of performance and conformance”.


AMD's Robert Hallock claims that NVIDIA GPUs, including Maxwell, cannot support the feature in hardware at all, while all AMD GCN graphics cards do. NVIDIA has yet to respond to our requests for an official statement, although we haven't poked every one of our contacts yet. We will certainly update and/or follow up if we hear from them. For now, though, we have no idea whether this is a hardware or software issue. Either way, it seems like more than just politics.

So what is it?

Simply put, Asynchronous Shaders allow a graphics driver to cram workloads into portions of the GPU that are idle but not otherwise being used. For instance, if a graphics task is hammering the ROPs, the driver would be able to toss an independent physics or post-processing task into the shader units alongside it. Kollock from Oxide Games used the analogy of HyperThreading, which allows two CPU threads to execute on the same core at the same time, as long as the core has the capacity for them.
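As a toy illustration of that packing idea (this is not driver code, and the numbers are invented): when two tasks hammer disjoint functional units, their combined time collapses toward the longest single task.

```python
# Toy model: each task is (name, milliseconds). Serial execution sums
# the durations; perfectly overlapped "async" execution -- the best case
# a driver could schedule for tasks using disjoint units -- takes only
# as long as the longest task.
def serial_ms(tasks):
    return sum(ms for _, ms in tasks)

def overlapped_ms(tasks):
    return max(ms for _, ms in tasks)

work = [("graphics (ROP-bound)", 10.0), ("compute (shader-bound)", 4.0)]
# serial: 14 ms per frame; overlapped: 10 ms -- the compute task rides
# along in shader units the graphics task was leaving idle.
```

Real hardware never overlaps perfectly, of course; contention for shared resources like memory bandwidth eats into that best case.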

Kollock also notes that compute is becoming more important in the graphics pipeline, and that it is possible to bypass graphics altogether. The fixed-function bits may never go away, but it's possible that at least some engines will completely bypass them -- maybe even Oxide's engine, several years down the road.

I wonder who would pursue something so silly, whether for a product or even just research.

But, as always, you will not get an infinite amount of performance by reducing your waste. You are always bound by the theoretical limits of your components, and you cannot optimize past them (short of changing the workload itself, obviously). The interesting part is: you can measure that. You can absolutely observe how long a GPU is idle and represent it as a percentage of a time span (typically a frame).
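The bookkeeping behind that measurement is plain interval arithmetic; a minimal sketch, with hypothetical numbers rather than output from any real profiler:

```python
def idle_fraction(busy, frame_ms):
    """Fraction of a frame the GPU sat idle, given busy (start, end)
    intervals in milliseconds. Overlapping intervals are merged first."""
    busy = sorted(busy)
    total = 0.0
    cur_start = cur_end = None
    for start, end in busy:
        if cur_end is None or start > cur_end:
            if cur_end is not None:
                total += cur_end - cur_start
            cur_start, cur_end = start, end
        else:
            cur_end = max(cur_end, end)  # merge overlapping busy spans
    if cur_end is not None:
        total += cur_end - cur_start
    return 1.0 - total / frame_ms

# Two busy spans totalling 12 ms in a 16.7 ms (60 FPS) frame:
# idle_fraction([(0, 8), (10, 14)], 16.7) -> ~0.28, i.e. ~28% idle
```

That idle percentage is exactly the headroom Asynchronous Shaders try to fill.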

And, of course, game developers profile GPUs from time to time...

According to Kollock, he has heard of some console developers seeing up to 30% performance increases using Asynchronous Shaders. Again, this is on console hardware, so the gains may be larger or smaller on the PC. In an informal chat with a developer at Epic Games (so a massive grain of salt is required), his late-night, “totally speculative” ballpark guesstimate was that, on the Xbox One, the GPU could theoretically accept a maximum of ~10-25% more work in Unreal Engine 4, depending on the scene. He also said that memory bandwidth gets in the way, which Asynchronous Shaders would be fighting against. It is something they are interested in and investigating, though.


This is where I speculate on drivers. When Mantle was announced, I looked at its features and said, “wow, this is everything that a high-end game developer wants, and a graphics developer absolutely does not”. From the OpenCL-like multiple-GPU model taking much of the QA out of SLI and CrossFire, to the memory and resource binding management, this should make graphics drivers so much easier.

It might not be free, though. Graphics drivers might still have a bunch of games to play to make sure that work is stuffed through the GPU as tightly packed as possible. We might continue to see “Game Ready” drivers in the coming years, even though much of that burden has been shifted to the game developers. On the other hand, maybe these APIs will level the whole playing field and let all players focus on chip design and efficient ingestion of shader code. As always, painfully always, time will tell.