GeForce Hotfix Driver 361.60 Released

Subject: Graphics Cards | January 12, 2016 - 08:11 PM |
Tagged: graphics drivers, graphics driver, nvidia

NVIDIA has been pushing for WHQL certification on its drivers, but issues sometimes slip through QA, both at Microsoft and within NVIDIA's own internal teams. Sometimes these issues are fixed in a future release, and sometimes NVIDIA pushes out a “HotFix” driver immediately. These hotfixes are often great for people who experience the problems, but they should not be installed by anyone else.

nvidia-2015-bandaid.png

In this case, GeForce Hotfix driver 361.60 fixes two issues. One is listed as “install & clocking related issues,” which refers to the GPU memory clock. According to Manuel Guzman of NVIDIA, some games and software were not causing the driver to fully raise the memory clock to its high-performance state. The other issue is “Crashes in Photoshop & Illustrator,” which fixes blue screen issues in both applications, and possibly in other programs that use the GPU in similar ways. I've never seen GeForce Driver 361.43 cause a BSOD in Photoshop, but I am a few versions behind with CS5.5.
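
If you want to check whether your own system is actually showing the clocking symptom before grabbing the hotfix, one low-effort option is to watch the driver's reported clocks while a game or GPU-accelerated application is running. Below is a rough sketch in Python that simply shells out to the nvidia-smi utility bundled with the driver; it assumes nvidia-smi is on your PATH and that your card and driver expose these particular query fields (run "nvidia-smi --help-query-gpu" to confirm).

```python
import subprocess

def current_clocks() -> str:
    # Ask the driver for the GPU name plus the current graphics and memory
    # clocks (in MHz). "clocks.gr" and "clocks.mem" are standard nvidia-smi
    # query fields, but support can vary by card and driver version.
    return subprocess.check_output(
        [
            "nvidia-smi",
            "--query-gpu=name,clocks.gr,clocks.mem",
            "--format=csv,noheader",
        ],
        text=True,
    ).strip()

if __name__ == "__main__":
    # Run this while the affected game or application is busy; a memory clock
    # parked at its idle value under load would match the behavior described above.
    print(current_clocks())
```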

Download links are available at NVIDIA Support, but unaffected users should just wait for an official driver, since the hotfix's minimal QA means it could introduce other issues.

Source: NVIDIA

Report: NVIDIA Pascal GP104 Discovered, May Not Use HBM

Subject: Graphics Cards | January 11, 2016 - 06:05 PM |
Tagged: rumor, report, pascal, nvidia, HBM2, hbm, GP104

A delivery of GPUs and related test equipment from Taiwan to Bangalore has led to speculation about NVIDIA's upcoming GP104 Pascal GPU.

GP104_delivery.png

Image via Zauba.com

How much information can be gleaned from an import shipping manifest (linked here)? The data indicates a chip with a 37.5 x 37.5 mm package and 2152 pins, which is being attributed to the GP104 based on knowledge of “earlier, similar deliveries” (or possible inside information). This has prompted members of the 3dcenter.org forums (German language) to speculate on the use of GDDR5 or GDDR5X memory, based on the likelihood of HBM fitting on a package of this size.

Of course, NVIDIA has stated that Pascal will implement 3D memory, and the upcoming GP100 will reportedly be on a 55 x 55 mm package using HBM2. Could this be a new, lower-cost part using the existing GDDR5 standard or the faster GDDR5X instead? VideoCardz and WCCFtech have posted stories based on the 3DCenter report, and to quote directly from the VideoCardz post on the subject:

"3DCenter has a theory that GP104 could actually not use HBM, but GDDR5(X) instead. This would rather be a very strange decision, but could NVIDIA possibly make smaller GPU (than GM204) and still accommodate 4 HBM modules? This theory is not taken from the thin air. The GP100 aka the Big Pascal, would supposedly come in 55x55mm BGA package. That’s 10mm more than GM200, which were probably required for additional HBM modules. Of course those numbers are for the whole package (with interposer), not just the GPU."

All of this is a lot to take from a shipping record that might not even be related to an NVIDIA product, but the report has made the rounds at this point so now we’ll just have to wait for new information.

Source: 3DCenter.org
Manufacturer: PC Perspective
Tagged: moores law, gpu, cpu

Are Computers Still Getting Faster?

It looks like CES is starting to wind down, which makes sense because it ended three days ago. Now that we're mostly caught up, I found a new video from The 8-Bit Guy. He doesn't really explain any old technologies in this one. Instead, he poses an open question about computer speed. He was able to have a functional computing experience on a ten-year-old Apple laptop, which made him wonder if the rate of computer advancement is slowing down.

I believe that he (and his guest hosts) made great points, but also missed a few important ones.

One of his main arguments is that software seems to have slowed down relative to hardware. I don't believe that is true, but I think it's looking in the right area. PCs these days are more than capable of handling just about any 2D user interface we would want, and they do so with plenty of overhead left for inefficient platforms and sub-optimal programming (relative to the '80s and '90s, at the very least). The areas that require extra horsepower are usually doing large batches of many related tasks. GPUs are key in this area, and they are keeping up as fast as they can, despite some stagnation in fabrication processes and difficulty (at least until HBM takes hold) in keeping up with memory bandwidth.

For the last five to ten years or so, CPUs have been evolving toward efficiency while GPUs are being adopted for the tasks that need to scale up. I'm guessing that AMD, when they designed the Bulldozer architecture, hoped that GPUs would be adopted much more aggressively, but even as graphics devices they now have a huge effect on Web, UI, and media applications.

google-android-opengl-es-extensions.jpg

These are also tasks that scale well between devices by lowering resolution (and so forth). The primary thing that a main CPU thread needs to do is figure out the system's state and keep the graphics card fed before the frame-train leaves the station. In my experience, that doesn't scale well (although you can sometimes reduce the number of tracked objects for games and so forth). Moreover, it is easier to add GPU performance than single-threaded CPU performance, because increasing frequency and single-threaded IPC is more complicated than laying out more duplicated blocks of shaders. These factors combine to give lower-end hardware a similar experience in the most noticeable areas.
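
To make that division of labor a little more concrete, here is a minimal, purely illustrative sketch in Python (with made-up names; no real engine is structured exactly like this) of a single frame: the CPU thread's cost scales with the number of tracked objects, while the stubbed-out GPU submission is where resolution, and therefore most of the scalable load, lives.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class GameObject:
    position: float = 0.0
    velocity: float = 1.0

    def update(self, dt: float) -> None:
        # CPU-bound simulation: roughly fixed cost per tracked object,
        # largely independent of the output resolution.
        self.position += self.velocity * dt

@dataclass
class Frame:
    draw_commands: List[str] = field(default_factory=list)
    resolution: Tuple[int, int] = (1920, 1080)

def run_frame(objects: List[GameObject], resolution: Tuple[int, int],
              dt: float = 1 / 60) -> Frame:
    # The main CPU thread figures out the system's state...
    for obj in objects:
        obj.update(dt)
    # ...and keeps the graphics card fed with a command list for this frame.
    commands = [f"draw sprite at x={obj.position:.2f}" for obj in objects]
    # Executing those commands (stubbed out here) is the part whose cost scales
    # with resolution, which is why lowering it helps weaker hardware so much.
    return Frame(draw_commands=commands, resolution=resolution)

if __name__ == "__main__":
    world = [GameObject(velocity=v) for v in (0.5, 1.0, 2.0)]
    frame = run_frame(world, resolution=(1280, 720))
    print(len(frame.draw_commands), "draw commands queued at", frame.resolution)
```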

So, up to this point, we discussed:

  • Software is often scaling in ways that are GPU (and RAM) limited.
  • CPUs are scaling down in power more than up in performance.
  • GPU-limited tasks can often be approximated with smaller workloads.
    • Software gets heavier, but it doesn't need to be "all the way up" (ex: resolution).
    • Some latencies are hard to notice anyway.

Back to the Original Question

This is where “Are computers still getting faster?” can be open to interpretation.

intel-devilscanyon-overview.JPG

Tasks are diverging from one class of processor into two, and both have separate industries, each with their own, multiple goals. As stated, CPUs are mostly progressing in power efficiency, which extends a (presumably) sufficient amount of performance downward into multiple types of devices. GPUs are definitely getting faster, but they can't do everything. At the same time, RAM is plentiful, but its contribution to performance can be approximated by paging unused chunks to the hard disk or, more recently on Windows, compressing them in place. Extra RAM in a newer computer won't help much as long as each individual task only uses a manageable amount of it, unless you look at it from a viewpoint that cares about multi-tasking.

In short, computers are still progressing, but the paths are now forked and winding.

AMD Announces Radeon R9 Nano Price Cut

Subject: Graphics Cards | January 11, 2016 - 08:32 AM |
Tagged: radeon, r9 nano, R9 Fury X, price cut, press release, amd

AMD has announced a price cut for the Radeon R9 Nano, which will now have a suggested price of $499, a $150 drop from the original $649 MSRP.

R9_Nano_PCPer.jpg

VideoCardz had the story this morning, quoting the official press release from AMD:

"This past September, the AMD Radeon™ R9 Nano graphics card launched to rave reviews, claiming the title of the world’s fastest and most power efficient Mini ITX gaming card, powered by the world’s most advanced and innovative GPU with on-chip High-Bandwidth Memory (HBM) for incredible 4K gaming performance. There was nothing like it ever seen before, and today, it remains in a class of its own, delivering smooth, true-to-life, premium 4K and VR gaming in a small form factor PC.

At a peak power of 175W and in a 6-inch form factor, it drives levels of performance that are on par with larger, more power-hungry GPUs from competitors, and blows away Mini ITX competitors with up to 30 percent better performance than the GTX 970 Mini ITX.

As of today, 11 January, this small card will have an even bigger impact on gamers around the world as AMD announces a change in the AMD Radeon™ R9 Nano graphics card’s SEP from $649 to $499. At the new price, the AMD Radeon™ R9 Nano graphics card will be more accessible than ever before, delivering incredible performance and leading technologies, with unbelievable efficiency in an astoundingly small form factor that puts it in a class all of its own."

The R9 Nano (reviewed here) was, to the team at PC Perspective, the most interesting GPU released in 2015. It was a compelling product for its tiny size, great performance, and high power efficiency, but the dialogue here probably mirrored that of a lot of potential buyers: for the price of a Fury X, did it make sense to buy the Nano? It all came down to need, and as we discovered when testing out small R9 Nano builds, very few enclosures on the market actually can't accommodate a full-length GPU.

Now that the price is moving down $150, it becomes an easier choice: $499 buys you the same fully enabled GPU core as the R9 Fury X for $150 less. The performance of a Fury X is only a few percentage points higher than that of the slightly lower-clocked Nano, so you're now getting most of the way there for much less. We have seen some R9 Fury X cards selling for $599, but even at $100 more, would you buy the Fury X over a Nano? If nothing else, the lower price makes the conversation a lot more interesting.

Source: VideoCardz

Far Cry Primal System Requirements Slightly Lower than 4?

Subject: Graphics Cards, Processors | January 9, 2016 - 07:00 AM |
Tagged: ubisoft, quad-core, pc gaming, far cry primal, dual-core

If you remember back when Far Cry 4 launched, it required a quad-core processor. It would block your attempts to launch the game unless it detected four CPU threads, either native quad-core or dual-core with two SMT threads per core. This has naturally been hacked around by the PC gaming community, but it is not supported by Ubisoft. It's also, apparently, a bad experience.
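
The gate itself is simple to picture. The snippet below is only an illustration of that kind of launch check, written in Python, and not Ubisoft's actual code: it counts logical CPU threads and refuses to continue if fewer than four are visible.

```python
import os

# Illustration only (not Ubisoft's implementation): a launcher-style gate that
# refuses to start unless at least four logical CPU threads are detected.
REQUIRED_THREADS = 4

logical_threads = os.cpu_count() or 1
if logical_threads < REQUIRED_THREADS:
    raise SystemExit(
        f"Detected {logical_threads} CPU thread(s); {REQUIRED_THREADS} are required."
    )
print(f"Detected {logical_threads} CPU threads; launching game...")
```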

ubisoft-2015-farcryprimal.jpg

The follow-up, Far Cry Primal, will be released in late February. Oddly enough, it has similar, but maybe slightly lower, system requirements. I'll list them, and highlight the differences.

Minimum:

  • 64-bit Windows 7, 8.1, or 10 (basically unchanged from 4)
  • Intel Core i3-550 (down from i5-750)
    • or AMD Phenom II X4 955 (unchanged from 4)
  • 4GB RAM (unchanged from 4)
  • 1GB NVIDIA GTX 460 (unchanged from 4)
    • or 1GB AMD Radeon HD 5770 (down from HD 5850)
  • 20GB HDD Space (down from 30GB)

Recommended:

  • Intel Core i7-2600K (up from i5-2400S)
    • or AMD FX-8350 (unchanged from 4)
  • 8GB of RAM (unchanged from 4)
  • NVIDIA GeForce GTX 780 (up from GTX 680)
    • or AMD Radeon R9 280X (down from R9 290X)

While the CPU changes are interesting, the opposing directions of the recommended GPUs are fascinating. Either the parts fall within Ubisoft's QA margin of error, or the studio increased the GPU load but optimized for AMD better than it did in Far Cry 4, for a net gain in performance (which would also explain the slight bump in CPU power required to feed the extra content). Of course, either way is just a guess.

Back on the CPU topic, though, I would be interested to see how Pentium Anniversary Edition parts perform. I wonder whether Ubisoft removed the lock-out on two-thread CPUs and, especially if hacks are still required, whether the game is playable on them anyway.

That is, in a month and a half.

Source: Ubisoft

CES 2016: AMD Shows Polaris Architecture and HDMI FreeSync Displays

Subject: Graphics Cards, Displays | January 8, 2016 - 02:56 PM |
Tagged: video, Polaris, hdmi, freesync, CES 2016, CES, amd

At its suite at CES this year, AMD was showing off a couple of new technologies. First, we got to see the upcoming Polaris GPU architecture in action running Star Wars Battlefront with some power meters hooked up. This is a similar demo to what I saw in Sonoma back in December, and it compares an upcoming Polaris GPU against the NVIDIA GTX 950. The result: total system power of just 86 watts on the AMD GPU and over 150 watts on the NVIDIA GPU.

Another new development from AMD on the FreeSync side of things was HDMI integration. The company took time at CES to showcase a pair of new HDMI-enabled monitors working with FreeSync variable refresh rate technology. 

Coverage of CES 2016 is brought to you by Logitech!

PC Perspective's CES 2016 coverage is sponsored by Logitech.

Follow all of our coverage of the show at http://pcper.com/ces!

Source: AMD

Intel Pushes Device IDs of Kaby Lake GPUs

Subject: Graphics Cards, Processors | January 8, 2016 - 02:38 AM |
Tagged: Intel, kaby lake, linux, mesa

Quick post about something that came to light over at Phoronix. Someone noticed that Intel published a handful of PCI device IDs for graphics processors to Mesa and libdrm. It will take a few months for the graphics drivers to catch up, but this suggests that Kaby Lake will be released relatively soon.

intel-2015-linux-driver-mesa.png

It also gives us hints about what Kaby Lake will be. Of the published batch, there will be six tiers of performance: GT1 has five IDs, GT1.5 has three IDs, GT2 has six IDs, GT2F has one ID, GT3 has three IDs, and GT4 has four IDs. Adding them up, we see that Intel plans 22 GPU devices. The Phoronix post lists what those device IDs are, but that is probably not interesting for our readers. Whether some of those devices overlap in performance or numbering is unclear, but it would make sense given how few SKUs Intel usually provides. I have zero experience in GPU driver development.
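
If you want to sanity-check that tally, the per-tier counts quoted above add up as follows (a throwaway sketch using the figures from the Phoronix report, not data pulled out of Mesa itself):

```python
# Kaby Lake graphics device ID counts per tier, as quoted above from the
# Mesa/libdrm patches covered by Phoronix (counts only; the PCI IDs themselves
# are omitted here).
kaby_lake_tiers = {
    "GT1": 5,
    "GT1.5": 3,
    "GT2": 6,
    "GT2F": 1,
    "GT3": 3,
    "GT4": 4,
}

total = sum(kaby_lake_tiers.values())
print(f"{total} device IDs across {len(kaby_lake_tiers)} tiers")  # 22 device IDs across 6 tiers
```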

Source: Phoronix

AMD Radeon Software Crimson Edition 16.1 Hotfix arrives

Subject: Graphics Cards | January 7, 2016 - 06:55 PM |
Tagged: crimson, amd

That's right, ladies and germs: not even a full week into 2016 and AMD has a new driver for you, or at least a hotfix version, Crimson 16.1.  If you are playing Elite: Dangerous, Fallout 4, or Just Cause 3, there are a number of fixes for issues known from the previous Crimson 15.12 release, which makes it worth picking up as soon as you can.  So far in testing, game profiles do seem to survive the update, so if you took advantage of the game-specific overclocking settings you should not lose all your hard work.

amd-radeon-crimson-graphics-driver-15-12-is-up-for-grabs-get-it-now-497817-2.jpg

They have also included fixes specific to VSR, odd HDMI setups, and flickering on FreeSync displays.  As always, there are a few bugs still to be ironed out, which is why you should fill out the bug reporting tool at the bottom of the driver page instead of just throwing things at random passersby ... as fun and entertaining as that may be.

Source: AMD

At just over $200, can the XFX R9 380X Double Dissipation XXX OC 4GB unseat a GTX 960?

Subject: Graphics Cards | January 7, 2016 - 02:36 PM |
Tagged: XFX R9 380X Double Dissipation XXX OC 4GB, xfx, amd, 380x

Take a quick break from reading about the soon-to-be-released technology at CES for a look at a GPU you can buy right now.  The XFX DD XXX series has been around for a few generations, and the XFX R9 380X Double Dissipation XXX OC 4GB sports the same custom DD cooler you would expect.  The factory overclock is quite modest: 20MHz on the GPU, taking it to 990MHz, while retaining the default 5.7GHz memory clock.  Of course, [H]ard|OCP were not going to leave that as-is; they hit a 1040MHz core and a 6.1GHz memory clock thanks to the card's custom cooling, although with no way to adjust voltage they felt the card could be capable of more if that feature were added.  Read on to see how this card compares against the ASUS STRIX GTX 960 DCU II OC in this ~$220 GPU showdown.

1451965220DbOIiYuZnI_1_1.jpg

"On our test bench today is the XFX R9 380X Double Dissipation XXX OC 4GB video card. It features the latest Ghost Thermal 3.0 cooling technology from XFX and a factory overclock. We will compare it to the ASUS STRIX GTX 960 DCU II OC 4GB in a battle of the $229 price point video cards to determine the better overall value."


Source: [H]ard|OCP

CES 2016: Rise of the Tomb Raider NVIDIA Bundle

Subject: Graphics Cards, Shows and Expos | January 7, 2016 - 02:03 PM |
Tagged: square enix, nvidia, CES 2016, CES

NVIDIA has just announced a new game bundle. If you purchase an NVIDIA GeForce GTX 970, GTX 980 (desktop or mobile), GTX 980 Ti, GTX 980M, or GTX 970M, then you will receive a free copy of Rise of the Tomb Raider. As always, make sure the retailer is selling a card that participates in the promotion: qualifying products with a download code will be specially marked, and NVIDIA will not upgrade non-participating stock to the bundle.

nvidia-2016-tombraider-glp-header.jpg

Rise of the Tomb Raider will go live on January 29th. It was originally released in November as an Xbox One timed exclusive. It will also arrive on the PlayStation 4, but not until “holiday,” which is probably around Q4 (or maybe late Q3).

If you purchase the bundle, then your graphics card will obviously be powerful enough to run the game. At a minimum, you will require a GeForce GTX 650 (2GB) or an AMD Radeon HD 7770 (2GB). The CPU needs are light too, requiring just a Sandy Bridge Core i3 (Intel Core i3-2100) or AMD's equivalent. Probably the only concern is the minimum of 6GB of system RAM, which also implies a 64-bit operating system. Now that the Xbox 360 and PlayStation 3 have been deprecated, 32-bit gaming will be increasingly rare for “AAA” titles. That said, we've been ramping up to 64-bit for the last decade; one of the first games to support x86-64 was Unreal Tournament 2004.

The Rise of the Tomb Raider NVIDIA bundle starts today.

Coverage of CES 2016 is brought to you by Logitech!

PC Perspective's CES 2016 coverage is sponsored by Logitech.

Follow all of our coverage of the show at http://pcper.com/ces!

Source: NVIDIA