Podcast #369 - Fable Legends DX12 Benchmark, Apple A9 SoC, Intel P3608 SSD, and more!

Subject: General Tech | October 1, 2015 - 02:17 PM |
Tagged: podcast, video, fable legends, dx12, apple, A9, TSMC, Samsung, 14nm, 16nm, Intel, P3608, NVMe, logitech, g410, TKL, nvidia, geforce now, qualcomm, snapdragon 820

PC Perspective Podcast #369 - 10/01/2015

Join us this week as we discuss the Fable Legends DX12 Benchmark, Apple A9 SoC, Intel P3608 SSD, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, and Allyn Malventano

Program length: 1:42:35

  1. Week in Review:
  2. 0:54:10 This episode of PC Perspective is brought to you by…Zumper, the quick and easy way to find your next apartment or home rental. To get started and to find your new home go to http://zumper.com/PCP
  3. News item of interest:
  4. Hardware/Software Picks of the Week:
  5. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Subject: General Tech
Manufacturer: NVIDIA

Setup, Game Selection

Yesterday NVIDIA officially announced the new GeForce NOW streaming game service, the conclusion to the years-long beta and development process known as NVIDIA GRID. As I detailed in my story yesterday about the reveal, GeForce NOW is a $7.99/mo. subscription service that will offer on-demand, cloud-streamed games to NVIDIA SHIELD devices, including a library of 60 games for that $7.99/mo. fee in addition to 7 titles in the “purchase and play” category. NVIDIA claims several advantages that make GeForce NOW a step above any other streaming gaming service, including PlayStation Now, OnLive and others: faster load times, higher resolution and frame rate, combined local PC and streaming game support, and more.


I have been able to use and play with the GeForce NOW service on our SHIELD Android TV device in the office for the last few days and I thought I would quickly go over my initial thoughts and impressions up to this point.

Setup and Availability

If you have an NVIDIA SHIELD Android TV (or a SHIELD Tablet) then the setup and getting started process couldn’t be any simpler for new users. An OS update is pushed that changes the GRID application on your home screen to GeForce NOW and you can sign in using your existing Google account on your Android device, making payment and subscription simple to manage. Once inside the application you can easily browse through the included streaming games or look through the smaller list of purchasable games and buy them if you so choose.


Playing a game is as simple as selecting a title from the grid list and hitting play.

Game Selection

Let’s talk about that game selection first. For $7.99/mo. you get access to 60 titles for unlimited streaming. I have included a full list below, originally posted in our story yesterday, for reference.

Continue reading my initial thoughts and an early review of GeForce NOW!!

NVIDIA Publishes DirectX 12 Tips for Developers

Subject: Graphics Cards | September 26, 2015 - 09:10 PM |
Tagged: microsoft, windows 10, DirectX 12, dx12, nvidia

Programming with DirectX 12 (and Vulkan, and Mantle) is a much different process than most developers are used to. The biggest change is how work is submitted to the driver. Previously, engines would bind attributes to a graphics API and issue one of a handful of “draw” commands, which turns the current state of the API into a message. Drivers would play around with queuing and manipulating these messages to optimize how the orders are sent to the graphics device, but the game developer had no control over that.


Now, the new graphics APIs are built more like command lists. Instead of bind, call, bind, call, and so forth, applications request queues to dump work into, and assemble the messages themselves. It even allows these messages to be bundled together and sent as a whole. This allows direct control over memory and the ability to distribute a lot of the command control across multiple CPU cores. An application is only as fast as its slowest (relevant) thread, so the ability to spread work out increases actual performance.
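The record-then-submit pattern can be sketched in plain C++ (these types are illustrative stand-ins, not the real D3D12 interfaces): commands are recorded into a list on any CPU thread, the list is closed, and the whole batch is handed to a queue at once.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical stand-in for a D3D12-style command list: work is
// recorded up front rather than issued one draw call at a time.
struct CommandList {
    std::vector<std::function<void()>> commands;
    bool closed = false;

    // Recording is cheap and can happen on any CPU thread,
    // since nothing touches the driver yet.
    void record(std::function<void()> cmd) {
        commands.push_back(std::move(cmd));
    }

    // After close(), the list is immutable and ready for submission.
    void close() { closed = true; }
};

// Stand-in for a command queue: the entire batch reaches the
// "driver" in one submission instead of many individual calls.
struct CommandQueue {
    std::size_t executed = 0;

    void execute(const CommandList& list) {
        assert(list.closed);  // only closed lists may be submitted
        for (const auto& cmd : list.commands) cmd();
        executed += list.commands.size();
    }
};
```

Because several threads can each fill their own list and the queue only sees finished batches, the expensive per-call driver negotiation of the old model disappears, which is exactly the CPU-scaling win the new APIs are after.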

NVIDIA has created a large list of things that developers should do, and others that they should not, to increase performance. Pretty much all of them apply equally, regardless of graphics vendor, but there are a few NVIDIA-specific comments, particularly the ones about NvAPI at the end and a few labeled notes in the “Root Signatures” category.

The tips are fairly diverse, covering everything from how to efficiently use things like command lists, to how to properly handle multiple GPUs, and even how to architect your engine itself. Even if you're not a developer, it might be interesting to look over for clues about what makes the API tick.

Source: NVIDIA

The Fable of the uncontroversial benchmark

Subject: Graphics Cards | September 24, 2015 - 02:53 PM |
Tagged: radeon, nvidia, lionhead, geforce, fable legends, fable, dx12, benchmark, amd

By now you should have memorized Ryan's review of Fable's DirectX 12 performance on a variety of cards and hopefully tried out our new interactive IFU charts.  You can't always cover every card, as those who were brave enough to look at the CSV file Ryan provided might have come to realize.  That's why it is worth peeking at The Tech Report's review after reading through ours.  They have included an MSI R9 285 and XFX R9 390 as well as an MSI GTX 970, which may be cards you are interested in seeing.  They also spend some time looking at CPU scaling and the effect that has on AMD and NVIDIA's performance.  Check it out here.


"Fable Legends is one of the first games to make use of DirectX 12, and it produces some truly sumptuous visuals. Here's a look at how Legends performs on the latest graphics cards."

Here are some more Graphics Card articles from around the web:

Graphics Cards

Manufacturer: Lionhead Studios

Benchmark Overview

When Microsoft approached me a couple of weeks ago with the chance to take an early look at an upcoming performance benchmark built on a DX12 game pending release later this year, I of course was excited for the opportunity. Our adventure into the world of DirectX 12 and performance evaluation started with the 3DMark API Overhead Feature Test back in March and was followed by the release of the Ashes of the Singularity performance test in mid-August. Both of these tests were pinpointing one particular aspect of the DX12 API - the ability to improve CPU throughput and efficiency with higher draw call counts and thus enabling higher frame rates on existing GPUs.


This game and benchmark are beautiful...

Today we dive into the world of Fable Legends, an upcoming free-to-play title based on the world of Albion. The game will be released on the Xbox One and for Windows 10 PCs, and it will require the use of DX12. Though scheduled for release in Q4 of this year, Microsoft and Lionhead Studios allowed us early access to a specific performance test using the UE4 engine and the world of Fable Legends. UPDATE: It turns out that the game will have a fall-back DX11 mode that will be enabled if the game detects a GPU incapable of running DX12.

This benchmark focuses more on the GPU side of DirectX 12 - on improved rendering techniques and visual quality rather than on the CPU scaling aspects that made Ashes of the Singularity stand out from other graphics tests we have utilized. Fable Legends is more representative of what we expect to see with the release of AAA games using DX12. Let's dive into the test and our results!

Continue reading our look at the new Fable Legends DX12 Performance Test!!

Phoronix Looks at NVIDIA's Linux Driver Quality Settings

Subject: Graphics Cards | September 22, 2015 - 09:09 PM |
Tagged: nvidia, linux, graphics drivers

In the NVIDIA driver control panel, there is a slider that controls Performance vs. Quality. On Windows, I leave it set to “Let the 3D application decide” and change my 3D settings individually, as needed. I haven't used NVIDIA's control panel on Linux too much, mostly because I usually install Linux on my laptop, which runs an AMD GPU, but the Linux UI seems to put a little more weight on this setting.


Or is that GTux?

Phoronix decided to test how each of these settings affects a few titles, and the only benchmark they bothered reporting on is Team Fortress 2, since the other titles saw basically zero variance. TF2 did see a difference of 6 FPS, though, from 115 FPS at High Quality to 121 FPS at Quality. Oddly enough, Performance and High Performance performed worse than Quality.

To me, this sounds like NVIDIA has basically forgotten about the feature. It barely affects any title, the one game where it changes anything measurable is from 2007, and it contradicts what the company is doing on other platforms. I predict that Quality is the default, which is the same as Windows (albeit with only 3 choices: “Performance”, “Balanced”, and the default “Quality”). If it is, you should probably just leave it there 24/7 in case NVIDIA has literally not thought about tweaking the other settings. On Windows, it is kind-of redundant with GeForce Experience anyway.

Final note: Phoronix has only tested the GTX 980. Results may vary elsewhere, but probably don't.

Source: Phoronix
Manufacturer: NVIDIA

Pack a full GTX 980 on the go!

For many years, the idea of a truly mobile gaming system has been attainable if you were willing to pay the premium for high performance components. But anyone who has done research in this field will tell you that, though they were named similarly, the mobile GPUs from both AMD and NVIDIA had a tendency to be noticeably slower than their desktop counterparts. A GeForce GTX 970M, for example, had a CUDA core count only slightly higher than that of the desktop GTX 960, and 30% lower than that of the true desktop GTX 970. So even though mobile performance was fantastic, desktop users continued to hold a dominant position over mobile gamers in PC gaming.

This fall, NVIDIA is changing that with the introduction of the GeForce GTX 980 for gaming notebooks. Notice I did not put an 'M' at the end of that name; it's not an accident. NVIDIA has found a way, through binning and component design, to cram the entirety of a GM204-based Maxwell GTX 980 GPU inside portable gaming notebooks.


The results are impressive and the implications for PC gamers are dramatic. Systems built with the GTX 980 will include the same 2048 CUDA cores, 4GB of GDDR5 running at 7.0 GHz and will run at the same base and typical GPU Boost clocks as the reference GTX 980 cards you can buy today for $499+. And, while you won't find this GPU in anything called a "thin and light", 17-19" gaming laptops do allow for portability of gaming unlike any SFF PC.

So how did they do it? NVIDIA has found a way to get a desktop GPU with a 165 watt TDP into a form factor that has a physical limit of 150 watts (for the MXM module implementations at least) through binning, component selection and improved cooling. Not only that, but there is enough headroom to allow for some desktop-class overclocking of the GTX 980 as well.

Continue reading our preview of the new GTX 980 for notebooks!!

What to use for 1080p on Linux or your future SteamOS machine

Subject: Graphics Cards | September 17, 2015 - 03:34 PM |
Tagged: linux, amd, nvidia

If you are using a 1080p monitor or perhaps even outputting to a large 1080p TV, there is no point in picking up a $500+ GPU as you will not be using the majority of its capabilities.  Phoronix has just done research on what GPU offers you the best value for gaming at that resolution, putting five AMD GPUs from the Radeon R9 270X to the R9 Fury and six NVIDIA cards ranging from the GTX 950 to a GTX TITAN X into their test bench.  The TITAN X is a bit of overkill, unless somehow your display is capable of 200+ fps.  When you look at frames per second per dollar the GTX 950 came out on top, providing playable frame rates at a very low cost.  These results may change as AMD's Linux driver improves but for now NVIDIA is the way to go for those who game on Linux.
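The value metric Phoronix leans on is simply frames per second divided by price. A minimal C++ sketch of that ranking (the card names are from the article, but the FPS and price figures below are placeholders, not Phoronix's measurements):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

struct CardResult {
    std::string name;
    double avg_fps;    // average FPS across the tested games
    double price_usd;  // street price at the time of testing
};

// Rank cards by frames per second per dollar, best value first.
std::vector<CardResult> rank_by_value(std::vector<CardResult> cards) {
    std::sort(cards.begin(), cards.end(),
              [](const CardResult& a, const CardResult& b) {
                  return a.avg_fps / a.price_usd > b.avg_fps / b.price_usd;
              });
    return cards;
}
```

With illustrative numbers, a $159 GTX 950 averaging 60 FPS delivers roughly 0.38 FPS per dollar, while a $999 TITAN X averaging 200 FPS delivers only 0.20, which is how a budget card ends up on top of a value chart despite far lower raw performance.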


"Earlier this week I posted a graphics card comparison using the open-source drivers and looking at the best value and power efficiency. In today's article is a larger range of AMD Radeon and NVIDIA GeForce graphics cards being tested under a variety of modern Linux OpenGL games/demos while using the proprietary AMD/NVIDIA Linux graphics drivers to see how not only the raw performance compares but also the performance-per-Watt, overall power consumption, and performance-per-dollar metrics."

Here are some more Graphics Card articles from around the web:

Graphics Cards


Source: Phoronix

Podcast #367 - AMD R9 Nano, a Corsair GTX 980Ti, NVIDIA Pascal Rumors and more!

Subject: General Tech | September 17, 2015 - 12:00 PM |
Tagged: xps 12, video, TSMC, Steam Controller, r9 nano, podcast, pascal, nvidia, msi, hdplex h5, gtx 980ti sea hawk, fury x, Fiji, dell, corsair, amd

PC Perspective Podcast #367 - 09/17/2015

Join us this week as we discuss the AMD R9 Nano, a Corsair GTX 980Ti, NVIDIA Pascal Rumors and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

MSI and Corsair Launch Liquid Cooled GTX 980 Ti SEA HAWK

Subject: Graphics Cards | September 17, 2015 - 09:14 AM |
Tagged: nvidia, msi, liquid cooled, GTX980Ti SEA HAWK, GTX 980 Ti, graphics card, corsair

We reported last night on Corsair's new Hydro GFX, a liquid-cooled GTX 980 Ti powered by an MSI GPU, and MSI has their own new product based on this concept as well.


"The MSI GTX 980Ti SEA HAWK utilizes the popular Corsair H55 closed loop liquid-cooling solution. The micro-fin copper base takes care of an efficient heat transfer to the high-speed circulation pump. The low-profile aluminum radiator is easy to install and equipped with a super silent 120 mm fan with variable speeds based on the GPU temperature. However, to get the best performance, the memory and VRM need top-notch cooling as well. Therefore, the GTX 980Ti SEA HAWK is armed with a ball-bearing radial fan and a custom shroud design to ensure the best cooling performance for all components."

The MSI GTX 980 Ti Sea Hawk actually appears identical to the Corsair Hydro GFX, and a look through the specs confirms the similarities:

  • NVIDIA GeForce GTX 980 Ti GPU
  • 2816 Processor Units
  • 1291 MHz/1190 MHz Boost/Base Core Clock
  • 6 GB 384-bit GDDR5 Memory
  • 7096 MHz Memory Clock
  • Dimensions: Card - 270x111x40 mm; Cooler - 151x118x52 mm
  • Weight: 1286 g

With a 1190 MHz base and 1291 MHz boost clock, the SEA HAWK has the same factory overclock speeds as the Corsair-branded unit, and MSI is also advertising the card's potential to go further:

"Even though the GTX 980Ti SEA HAWK boasts some serious clock speeds out-of-the-box, the MSI Afterburner overclocking utility allows users to go even further. Explore the limits with Triple Overvoltage, custom profiles and real-time hardware monitoring."

I imagine the availability of this MSI-branded product will be greater than that of the Corsair-branded equivalent, but in either case you get a GTX 980 Ti with the potential to run as fast and cool as a custom-cooled solution, without any of the extra work. Pricing wasn't immediately available this morning, but expect something close to the $739 MSRP we saw with Corsair.

Source: MSI