Phoronix Tests Almost a Decade of GPUs

Subject: Graphics Cards | January 20, 2016 - 08:26 PM |
Tagged: nvidia, linux, tesla, fermi, kepler, maxwell

It's nice to see long-term roundups every once in a while. They do not really provide useful information for someone looking to make a purchase, but they show how our industry is changing (or not). In this case, Phoronix tested twenty-seven NVIDIA GeForce cards across four architectures: Tesla, Fermi, Kepler, and Maxwell. In other words, from the GeForce 8 series all the way up to the GTX 980 Ti.


Image Credit: Phoronix

Nine years of advancements in ASIC design, with a doubling period of 18 months, should yield a 64-fold improvement. Transistor count falls short, showing only about a 12-fold increase between the Titan X and the largest first-wave Tesla chip, although that shortfall isn't really NVIDIA's doing, since they are a fabless semiconductor designer. The main reason I include this figure is to show the actual Moore's Law trend over this time span, but it also highlights the slowdown in process technology.

Performance per watt does depend on NVIDIA, though, and the ratio between the GTX 980 Ti and the 8500 GT is about 72:1. While this is slightly better than the 64:1 target, these parts sit in very different locations in their respective product stacks. Swap the 8500 GT for the following year's 9800 GTX, which makes for a comparison between top-of-the-line GPUs of their respective eras, and the GTX 980 Ti shows only a 6.2x improvement in performance per watt. On the other hand, that part was outstanding for its era.
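For reference, the 64x target is just two raised to the number of 18-month doubling periods in nine years. A quick back-of-the-envelope check looks like this (a rough sketch; the ratios are simply the figures quoted above, not new measurements):

```python
# Sanity check on the Moore's Law arithmetic above.
# The 72:1 and ~12x ratios are the article's figures; the 64x target is
# 2 raised to the number of 18-month doubling periods in 9 years.

years = 9
doubling_period = 1.5                       # years per doubling
expected = 2 ** (years / doubling_period)   # 2^6 = 64x expected improvement

perf_per_watt_ratio = 72                    # GTX 980 Ti vs. 8500 GT (Phoronix)
transistor_ratio = 12                       # Titan X vs. largest first-wave Tesla

print(f"Moore's Law expectation over {years} years: {expected:.0f}x")
print(f"Measured perf/W ratio: {perf_per_watt_ratio}x "
      f"({perf_per_watt_ratio / expected:.2f}x the target)")
print(f"Transistor-count ratio: ~{transistor_ratio}x "
      f"({transistor_ratio / expected:.2f}x the target)")
```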

I should note that each of these tests takes place on Linux. That might not perfectly reflect the landscape on Windows, but again, it's interesting in its own right.

Source: Phoronix

Report: NVIDIA Preparing GeForce GTX 980MX and 970MX Mobile GPUs

Subject: Graphics Cards | January 19, 2016 - 03:31 PM |
Tagged: rumor, report, nvidia, GTX 980MX, GTX 980M, GTX 970MX, GTX 970M, geforce

NVIDIA is reportedly preparing faster mobile GPUs based on Maxwell, with a GTX 980MX and 970MX on the way.


The new GTX 980MX would sit between the GTX 980M and the laptop version of the full GTX 980, with 1664 CUDA cores (compared to 1536 with the 980M), 104 Texture Units (up from the 980M's 96), a 1048 MHz core clock, and up to 8 GB of GDDR5. Memory speed and bandwidth will reportedly be identical to the GTX 980M at 5000 MHz and 160 GB/s respectively, with both GPUs using a 256-bit memory bus.

The GTX 970MX represents a similar upgrade over the existing GTX 970M, with the CUDA core count increased from 1280 to 1408, Texture Units up from 80 to 88, and 8 additional raster devices (56 vs. 48). Both the 970M and 970MX use 192-bit GDDR5 clocked at 5000 MHz, and are available with the same 3 GB or 6 GB of frame buffer.

WCCFtech prepared a chart to demonstrate the differences between NVIDIA's mobile offerings:

GeForce GTX 980 (Laptop Version): GM204 (Maxwell), 2048 CUDA cores, 128 texture units, 64 raster devices, 1218 MHz core clock, 256-bit bus, 8 GB GDDR5 at 7008 MHz, 224 GB/s, ~150W TDP
GeForce GTX 980MX: GM204 (Maxwell), 1664 CUDA cores, 104 texture units, 64 raster devices, 1048 MHz core clock, 256-bit bus, 8/4 GB GDDR5 at 5000 MHz, 160 GB/s, 125W TDP
GeForce GTX 980M: GM204 (Maxwell), 1536 CUDA cores, 96 texture units, 64 raster devices, 1038 MHz core clock, 256-bit bus, 8/4 GB GDDR5 at 5000 MHz, 160 GB/s, 125W TDP
GeForce GTX 970MX: GM204 (Maxwell), 1408 CUDA cores, 88 texture units, 56 raster devices, 941 MHz core clock, 192-bit bus, 6/3 GB GDDR5 at 5000 MHz, 120 GB/s, 100W TDP
GeForce GTX 970M: GM204 (Maxwell), 1280 CUDA cores, 80 texture units, 48 raster devices, 924 MHz core clock, 192-bit bus, 6/3 GB GDDR5 at 5000 MHz, 120 GB/s, 100W TDP
GeForce GTX 965M: GM204 (Maxwell), 1024 CUDA cores, 64 texture units, 32 raster devices, 950 MHz core clock, 128-bit bus, 4 GB GDDR5 at 5000 MHz, 80 GB/s, 90W TDP
GeForce GTX 960M: GM107 (Maxwell), 640 CUDA cores, 40 texture units, 16 raster devices, 1097 MHz core clock, 128-bit bus, 4 GB GDDR5 at 5000 MHz, 80 GB/s, 75W TDP

These new GPUs will reportedly be based on the same Maxwell GM204 core, and TDPs are apparently unchanged at 125W for the GTX 980MX, and 100W for the 970MX.

We will await any official announcement.

Source: WCCFtech
Subject: Displays
Manufacturer: Acer

UltraWide G-Sync Arrives

When NVIDIA first launched G-Sync monitors, they had the advantage of being first to literally everything. They had the first variable refresh rate technology, the first displays of any kind that supported it, and the first ecosystem to enable it. AMD talked about FreeSync just a few months later, but it wasn't until March of 2015 that we got our hands on the first FreeSync-enabled display, and it was very much behind the experience provided by G-Sync displays. That said, what we saw with that launch, and continue to see as time goes on, is that there is a much wider selection of FreeSync options, with varying specifications, compared to what NVIDIA has built out.

This is important to note only because, as we look at the Acer Predator X34 monitor today, the first 34-in curved panel to support G-Sync, it comes three months after the release of a comparable monitor from Acer that works with AMD FreeSync. The not-as-sexily-named Acer XR341CK offers a 3440x1440 resolution, a 34-in curved IPS panel, and a 75Hz refresh rate.


But, as NVIDIA tends to do, it found a way to differentiate its own product, with the help of Acer. The Predator X34 monitor has a unique look and style to it, and it raises the maximum refresh rate to 100Hz (although that is considered overclocking). The price is a bit higher too, coming in at $1300 or so; the FreeSync-enabled XR341CK monitor sells for just $941.

Continue reading our review of the Acer Predator X34 G-Sync Monitor!!

GeForce Hotfix Driver 361.60 Released

Subject: Graphics Cards | January 13, 2016 - 01:11 AM |
Tagged: graphics drivers, graphics driver, nvidia

NVIDIA has been pushing for WHQL certification for their drivers, but sometimes issues slip through QA, both at Microsoft and at their own internal team(s). Sometimes these issues will be fixed in a future release, but sometimes they push out a “HotFix” driver immediately. This is often great for people who experience the problems, but these drivers should not be installed otherwise.


In this case, GeForce Hotfix driver 361.60 fixes two issues. One is listed as “install & clocking related issues,” which refers to the GPU memory clock. According to Manuel Guzman of NVIDIA, some games and software were not causing the driver to fully wake the memory clock to a high-performance state. The other issue is “Crashes in Photoshop & Illustrator,” which fixes blue screen issues in both applications, and possibly in other programs that use the GPU in similar ways. I've never seen GeForce Driver 361.43 cause a BSOD in Photoshop, but I am a few versions behind with CS5.5.

Download links are available at NVIDIA Support, but unaffected users should just wait for an official driver, since the hotfix has received minimal QA and could introduce other issues.

Source: NVIDIA

Report: NVIDIA Pascal GP104 Discovered, May Not Use HBM

Subject: Graphics Cards | January 11, 2016 - 11:05 PM |
Tagged: rumor, report, pascal, nvidia, HBM2, hbm, GP104

A delivery of GPUs and related test equipment from Taiwan to Bangalore has led to speculation about NVIDIA's upcoming GP104 Pascal GPU.



How much information can be gleaned from an import shipping manifest (linked here)? The data indicates a chip with a 37.5 x 37.5 mm package and 2152 pins, which is being attributed to the GP104 based on knowledge of “earlier, similar deliveries” (or possibly inside information). This has prompted members of the 3DCenter forums (German language) to speculate that the GP104 will use GDDR5 or GDDR5X memory, since a package of this size is unlikely to accommodate HBM.

Of course, NVIDIA has stated that Pascal will implement 3D memory, and the upcoming GP100 will reportedly be on a 55 x 55 mm package using HBM2. Could this be a new, lower-cost part using the existing GDDR5 standard or the faster GDDR5X instead? VideoCardz and WCCFtech have posted stories based on the 3DCenter report, and to quote directly from the VideoCardz post on the subject:

"3DCenter has a theory that GP104 could actually not use HBM, but GDDR5(X) instead. This would rather be a very strange decision, but could NVIDIA possibly make smaller GPU (than GM204) and still accommodate 4 HBM modules? This theory is not taken from the thin air. The GP100 aka the Big Pascal, would supposedly come in 55x55mm BGA package. That’s 10mm more than GM200, which were probably required for additional HBM modules. Of course those numbers are for the whole package (with interposer), not just the GPU."

All of this is a lot to take from a shipping record that might not even be related to an NVIDIA product, but the report has made the rounds at this point so now we’ll just have to wait for new information.


CES 2016: Rise of the Tomb Raider NVIDIA Bundle

Subject: Graphics Cards, Shows and Expos | January 7, 2016 - 07:03 PM |
Tagged: square enix, nvidia, CES 2016, CES

NVIDIA has just announced a new game bundle. If you purchase an NVIDIA GeForce GTX 970, GTX 980 (desktop or mobile), GTX 980 Ti, GTX 980M, or GTX 970M, then you will receive a free copy of Rise of the Tomb Raider. As always, make sure the retailer is selling a participating card; products that include a download code will be specially marked. NVIDIA will not upgrade non-participating stock to the bundle.


Rise of the Tomb Raider will go live on January 29th. It was originally released in November as an Xbox One timed exclusive. It will also arrive on the PlayStation 4, but not until “holiday,” which is probably around Q4 (or maybe late Q3).

If you purchase the bundle, then your graphics card will obviously be powerful enough to run the game. At a minimum, you will require a GeForce GTX 650 (2GB) or an AMD HD 7770 (2GB). The CPU needs are light, too, requiring just a Sandy Bridge Core i3 (Intel Core i3-2100) or AMD's equivalent. Probably the only concern would be the minimum of 6GB of system RAM, which also requires a 64-bit operating system. Now that the Xbox 360 and PlayStation 3 have been deprecated, 32-bit gaming will be increasingly rare for “AAA” titles. That said, we've been ramping up to 64-bit for the last decade; one of the first games to support x86-64 was Unreal Tournament 2004.

The Rise of the Tomb Raider NVIDIA bundle starts today.

Coverage of CES 2016 is brought to you by Logitech!

PC Perspective's CES 2016 coverage is sponsored by Logitech.

Follow all of our coverage of the show at!

Source: NVIDIA

CES 2016: NVIDIA talks SHIELD Updates and VR-Ready Systems

Subject: Graphics Cards, Shows and Expos | January 6, 2016 - 02:39 AM |
Tagged: vr ready, VR, virtual reality, video, Oculus, nvidia, htc, geforce, CES 2016, CES

Outside of the in-depth discussion of the Drive PX 2 and its push into autonomous driving, NVIDIA didn't have much news to report. We stopped by the suite and got a few updates on SHIELD and the company's VR Ready program to certify systems that meet minimum recommended specifications for a solid VR experience.

For the SHIELD, NVIDIA is bringing Android 6.0 Marshmallow to the device, with new features like shared storage and the ability to customize the home screen of the Android TV interface. Nothing earth-shattering, and all of it is part of the 6.0 rollout.

The VR Ready program from NVIDIA will validate notebooks, systems, and graphics cards that have enough horsepower to meet the minimum performance levels for a good VR experience. At this point, the specs essentially match what Oculus has put forth: a GTX 970 or better on the desktop and a GTX 980 (full, not 980M) on mobile.

Other than that, Ken and I took in some of the more recent VR demos, including Epic's Bullet Train on the final Oculus Rift and Google's Tilt Brush on the latest iteration of the HTC Vive. Those were both incredibly impressive, though the Everest demo that simulates a portion of the mountain climb was the one that really made me feel like I was somewhere else.

Check out the video above for more impressions!

Coverage of CES 2016 is brought to you by Logitech!

PC Perspective's CES 2016 coverage is sponsored by Logitech.

Follow all of our coverage of the show at!

Source: NVIDIA

CES 2016 Podcast Day 1 - Lenovo, NVIDIA Press Conference, new AMD GPUs and more!

Subject: General Tech | January 5, 2016 - 09:40 AM |
Tagged: podcast, video, CES, CES 2016, Lenovo, Thinkpad, x1 carbon, x1 yoga, nvidia, pascal, amd, Polaris, FinFET, 14nm

CES 2016 Podcast Day 1 - 01/05/16

CES is just beginning. Join us for announcements from Lenovo, NVIDIA Press Conference, new AMD GPUs and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Josh Walrath, Allyn Malventano, Ken Addison and Sebastian Peak

Program length: 1:11:05

Be sure to subscribe to the PC Perspective YouTube channel!!

CES 2016: NVIDIA Launches DRIVE PX 2 With Dual Pascal GPUs Driving A Deep Neural Network

Subject: General Tech | January 5, 2016 - 06:17 AM |
Tagged: tegra, pascal, nvidia, driveworks, drive px 2, deep neural network, deep learning, autonomous car

NVIDIA is using the Consumer Electronics Show to launch the Drive PX 2, its latest bit of hardware aimed at autonomous vehicles. Several NVIDIA products combine to create the company's self-driving "end to end solution," including DIGITS, DriveWorks, and the Drive PX 2 hardware to train, optimize, and run the neural network software that will allegedly be the brains of future self-driving cars (or so NVIDIA hopes).


The Drive PX 2 hardware is the successor to the Tegra-powered Drive PX released last year. The Drive PX 2 represents a major computational power jump with 12 CPU cores and two discrete "Pascal"-based GPUs! NVIDIA has not revealed the full specifications yet, but they have made certain details available. There are two Tegra SoCs along with two GPUs that are liquid cooled. The liquid cooling consists of a large metal block with copper tubing winding through it and then passing into what looks to be external connectors that attach to a completed cooling loop (an exterior radiator, pump, and reservoir).

There are a total of 12 CPU cores: eight ARM Cortex-A57 cores and four "Denver" cores. The discrete graphics are built on the 16nm FinFET process and will use the company's upcoming Pascal architecture. The total package will draw a maximum of 250 watts and will offer up to 8 TFLOPS of computational horsepower and 24 trillion "deep learning operations per second." That last number refers to how many specialized deep learning instructions the hardware can process per second, which sounds like an impressive amount of power for making connections and classifying incoming data. Drive PX 2 is, according to NVIDIA, 10 times faster than its predecessor at running these specialized instructions and has nearly 4 times the computational horsepower in terms of TFLOPS.

Similar to the original Drive PX, the driving AI platform can accept and process the inputs of up to 12 video cameras. It can also handle LiDAR, RADAR, and ultrasonic sensors. NVIDIA compared the Drive PX 2 to the TITAN X in its ability to process 2,800 AlexNet images per second versus the consumer graphics card's 450, which, while perhaps not the best comparison, does make it look promising.


Neural networks and machine learning are at the core of what makes autonomous vehicles possible, along with hardware powerful enough to take in a multitude of sensor data and process it fast enough. The software side of things is built around the DriveWorks development kit, which provides specialized instructions and a neural network that can detect objects based on sensor input(s), identify and classify them, determine the positions of objects relative to the vehicle, and calculate the most efficient path to the destination.
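To make that per-frame flow concrete, here is a deliberately simplified, hypothetical sketch in Python. None of these function or type names come from DriveWorks; the detection, localization, and planning steps are stubbed out purely to illustrate how sensor data moves from detection through classification and localization to path planning:

```python
# Hypothetical sketch of the sense -> detect/classify -> localize -> plan loop.
# All names are illustrative stand-ins, not NVIDIA's actual API.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "stop sign"
    confidence: float   # classifier confidence, 0..1
    position_m: tuple   # (x, y) relative to the vehicle, in meters

def detect_and_classify(camera_frames, lidar_points, radar_returns) -> List[Detection]:
    """Run the neural network over fused sensor data (stubbed out here)."""
    return [Detection("stop sign", 0.97, (12.0, 1.5))]

def localize(detections: List[Detection], hd_map=None):
    """Place the vehicle and detected objects in a common frame (stub)."""
    return {"vehicle": (0.0, 0.0), "objects": detections}

def plan_path(world_state, destination):
    """Choose the next steering/throttle command given the world state (stub)."""
    return {"steering_deg": 0.0, "throttle": 0.2}

def process_frame(camera_frames, lidar_points, radar_returns, destination):
    detections = detect_and_classify(camera_frames, lidar_points, radar_returns)
    world_state = localize(detections)
    return plan_path(world_state, destination)

if __name__ == "__main__":
    command = process_frame(camera_frames=[], lidar_points=[], radar_returns=[],
                            destination=(100.0, 0.0))
    print(command)
```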

Specifically, in the press release NVIDIA stated:

"This complex work is facilitated by NVIDIA DriveWorks™, a suite of software tools, libraries and modules that accelerates development and testing of autonomous vehicles. DriveWorks enables sensor calibration, acquisition of surround data, synchronization, recording and then processing streams of sensor data through a complex pipeline of algorithms running on all of the DRIVE PX 2's specialized and general-purpose processors. Software modules are included for every aspect of the autonomous driving pipeline, from object detection, classification and segmentation to map localization and path planning."

DIGITS is the platform used to train the neural network that is then deployed on the Drive PX 2 hardware. The software is purportedly improving in both accuracy and training time, with NVIDIA achieving a 96% accuracy rating at identifying traffic signs from the Ruhr University Bochum traffic sign database after a training session lasting only 4 hours, as opposed to training times of days or even weeks.
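For context on what a figure like that means, the accuracy is simply the fraction of held-out, labeled sign images the trained network gets right. A trivial sketch of the measurement (with a stub standing in for the trained classifier; this is not DIGITS code):

```python
# Minimal sketch: accuracy = correct top-1 predictions / total test images.
# `classify` is a placeholder for the trained network, not a real DIGITS call.
def classify(image):
    return "speed limit 50"   # placeholder prediction

def accuracy(test_set):
    correct = sum(1 for image, label in test_set if classify(image) == label)
    return correct / len(test_set)

test_set = [("img0", "speed limit 50"), ("img1", "stop"), ("img2", "yield")]
print(f"held-out accuracy: {accuracy(test_set):.0%}")
```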

NVIDIA claims that the initial Drive PX has been picked up by over 50 development teams (automakers, universities, software developers, et al) interested in autonomous vehicles. Early access to development hardware is expected to be towards the middle of the year with general availability of final hardware in Q4 2016.

The new Drive PX 2 is getting a serious hardware boost with the inclusion of two dedicated graphics processors (the Drive PX was based around two Tegra X1 SoCs), and that should allow automakers to really push what's possible in real time, bringing the self-driving car a bit closer to reality and final (self-)drivable products. I'm excited to see that vision come to fruition and am looking forward to seeing what this improved hardware will enable in the auto industry!

Coverage of CES 2016 is brought to you by Logitech!

PC Perspective's CES 2016 coverage is sponsored by Logitech.

Follow all of our coverage of the show at!

Source: NVIDIA
Subject: Mobile
Manufacturer: MSI

Design and Compute Performance

I'm going to be honest with you right off the bat: there isn't much more I can say about the MSI GT72S notebook that hasn't already been said either on this website or on the PC Perspective Podcast. Though there are many iterations of this machine, the version we are looking at today is known as the "GT72S Dominator Pro G Dragon-004" and it includes some impressive hardware and design choices. Perhaps you've heard of this processor called "Skylake" and a GPU known as the "GTX 980"? 


The GT72S is a gaming notebook in the truest sense of the term. It is big, heavy and bulky, not meant for daily travel or walking around campus for very long distances. It has a 17-in screen, more USB 3.0 ports than most desktop computers and also more gaming horsepower than we've ever seen crammed into that kind of space. That doesn't make it perfect for everyone of course: battery life is poor and you may have to sell one of your kids to be able to afford it. But then, you might be able to afford A LOT if you sold the kids, amiright?


Let's dive into what makes the new MSI GT72S so impressive and why every PC gamer that has a hankering for moving their rig will be drooling.

Continue reading our review of the MSI GT72S Dominator Pro G gaming notebook!!