AMD Releases Catalyst 13.11 Beta 9.2 Driver To Correct Performance Variance Issue of R9 290 Series Graphics Cards

Subject: Graphics Cards, Cases and Cooling | November 8, 2013 - 02:41 AM |
Tagged: R9 290X, powertune, hawaii, graphics drivers, gpu, GCN, catalyst 13.11 beta, amd, 290x

AMD recently launched its 290X graphics card, the new high-end single-GPU solution built on the GCN-based Hawaii architecture. The new GPU is rather large and incorporates an updated version of AMD's PowerTune technology, which automatically adjusts clockspeeds based on temperature while the fan is capped at a maximum speed of 40%. Unfortunately, it seems that some 290X cards available at retail exhibited performance characteristics that varied from review units.

Retail versus Review Sample Performance Variance Testing.jpg

AMD has looked into the issue and released the following statement in response to the performance variances (which PC Perspective is looking into as well).

Hello, We've identified that there's variability in fan speeds across AMD R9 290 series boards. This variability in fan speed translates into variability of the cooling capacity of the fan-sink. The flexibility of AMD PowerTune technology enables us to correct this variability in a driver update. This update will normalize the fan RPMs to the correct values.

The correct target RPM values are 2200RPM for the AMD Radeon R9 290X "Quiet mode", and 2650RPM for the R9 290. You can verify these in GPU-Z. If you're working on stories relating to R9 290 series products, please use this driver as it will reduce any variability in fan speeds. This driver will be posted publicly tonight.

From the AMD statement, it seems that fan speeds varying from card to card are causing the performance variances. With a GPU rated to run at up to 95C, a fan limited to a 40% maximum, and dynamic clockspeeds, it is only natural that cards could perform differently, especially if case airflow is not up to par. On the other hand, the specific issue pointed out by other technology review sites (per my understanding, Tom's Hardware was the first to report on the retail vs review sample variance) is that the 40% maximum on certain cards does not actually correspond to the RPM target that AMD intended.

AMD intended for the Radeon R9 290X's fan to run at 2200 RPM (40%) in Quiet Mode and the fan on the R9 290 (which has a maximum fan speed percentage of 47%) to spin at 2650 RPM in Quiet Mode. However, some cards' 40% settings are not actually hitting those intended RPMs, which causes performance differences as cooling varies and PowerTune adjusts the clockspeeds accordingly.
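To make that interaction a bit more concrete, below is a rough, purely illustrative sketch of how a temperature-driven boost scheme like PowerTune can turn a fan-speed difference into a clockspeed difference. The control loop, constants, and cooling model are invented for illustration and are not AMD's actual (proprietary) algorithm.

```python
# Hypothetical, simplified sketch of a PowerTune-style control loop.
# All names, constants, and the cooling model are invented for illustration;
# this is not AMD's actual (proprietary) algorithm.

TEMP_TARGET_C = 95          # the 290X's rated operating temperature
BOOST_CLOCK_MHZ = 1000      # the 290X's "up to" clock
MIN_CLOCK_MHZ = 727         # assumed floor for this sketch, not an official spec

def cooling(fan_rpm):
    """Toy model: heat removed per step scales with fan RPM (arbitrary units)."""
    return fan_rpm * 0.05

def heat(clock_mhz):
    """Toy model: heat generated per step scales with clockspeed."""
    return clock_mhz * 0.11

def next_clock(clock_mhz, temp_c):
    """Throttle toward the floor at the temperature target, otherwise recover."""
    if temp_c >= TEMP_TARGET_C:
        return max(MIN_CLOCK_MHZ, clock_mhz - 13)
    return min(BOOST_CLOCK_MHZ, clock_mhz + 13)

# Two cards at the same "40%" slider position but different actual fan RPMs
# (the variance AMD describes) settle at very different clockspeeds.
for fan_rpm in (2200, 2000):
    clock, temp = BOOST_CLOCK_MHZ, 60.0
    for _ in range(5000):                      # simulate a long, sustained load
        temp += (heat(clock) - cooling(fan_rpm)) * 0.01
        clock = next_clock(clock, temp)
    print(f"{fan_rpm} RPM -> ~{clock} MHz at ~{temp:.0f} C under sustained load")
```

In this toy model the card actually spinning at 2200 RPM holds its boost clock, while a card that only reaches 2000 RPM at the same slider position heats up to its temperature target and gets pulled down, which is the kind of card-to-card variance described above.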

Luckily, AMD has addressed the issue with a driver update that ensures the fans are actually spinning at the intended speeds when set to the 40% (R9 290X) or 47% (R9 290) values in Catalyst Control Center. The new driver, Catalyst 13.11 Beta 9.2, is available for download now.

If you are running an R9 290 or R9 290X in your system, you should consider updating to the latest driver to ensure you are getting the cooling (and, as a result, gaming) performance you are supposed to be getting.

Catalyst 13.11 Beta 9.2 is available from the AMD website.


Stay tuned to PC Perspective for more information on the Radeon R9 290 series GPU performance variance issue as it develops.

Image credit: Ryan Shrout (PC Perspective).

Source: AMD

EVGA Outfits GTX 780 With Hydro Copper Water Block

Subject: Graphics Cards | June 1, 2013 - 01:38 PM |
Tagged: watercooling, nvidia, hydro copper, gtx 780, gpu, gk110, evga

EVGA GTX 780 Hydro Copper GPUs

While NVIDIA restricted partners from going with aftermarket coolers on the company's GTX TITAN graphics card, the recently released NVIDIA GTX 780 does not appear to have the same limits placed upon it. As such, many manufacturers will be releasing GTX 780 graphics cards with custom coolers. One such design that caught my attention was the Hydro Copper full cover waterblock from EVGA.

EVGA GTX 780 with Hydro Copper Water Block (2).jpg

This new cooler will be used on at least two upcoming EVGA graphics cards, the GTX 780 and GTX 780 Classified. EVGA has not yet announced clockspeeds or pricing for the Classified edition, but the GTX 780 Hydro Copper will be a GTX 780 GPU clocked at 980 MHz base and 1033 MHz boost. The 3GB of GDDR5 memory remains at the stock 6008 MHz, however. The card uses a single 8-pin and a single 6-pin PCI-E power connector and is selling for around $799 at retailers such as Newegg.

The GTX 780 Classified Hydro Copper will have a factory overclocked GTX 780 GPU and 3GB of GDDR5 memory at 6008 MHz, but beyond that details are scarce. The 8+8-pin PCI-E power connectors do suggest a healthy overclock (or at least that users will be able to push the cards after they get them).

Both the GTX 780 and GTX 780 Classified Hydro Copper graphics cards feature two DL-DVI, one HDMI, and one DisplayPort video outputs.

EVGA GTX 780 Classified with Hydro Copper Water Block (1).jpg

The Hydro Copper cooler itself is the really interesting bit about these cards, though. It is a single-slot, full-cover waterblock that cools the entire graphics card (GPU, VRM, memory, etc.). It has two inlet/outlet ports that can be swapped around to accommodate SLI setups or other custom water tube routing. A configurable LED-backlit EVGA logo adorns the side of the card and can be controlled in software. A 0.25 x 0.35 pin matrix is used in the portion of the block above the GPU to increase the surface area and aid in cooling. Unfortunately, while the card and cooler are single slot, you will actually need two case PCI expansion slots due to the two DL-DVI connectors.

It looks like a neat card, and it should perform well. I'm looking forward to seeing reviews of the card and how the cooler holds up to overclocking. Buying an overclocked card with a pre-installed waterblock is not for everyone, but for some, a water-cooled GPU that keeps its warranty will be worth more than pairing a stock card with a custom block.

Source: EVGA

AMD Catalyst 13.6 Beta Drivers For Windows and Linux Now Available

Subject: Graphics Cards | May 28, 2013 - 11:32 PM |
Tagged: gpu, drivers, catalyst 13.6 beta, beta, amd

AMD has released its Catalyst 13.6 beta graphics driver, and it fixes a number of issues under both Windows 8 and Linux. The new beta driver is also compatible with the existing Catalyst 13.5 CAP1 (Catalyst Application Profile) which improves performance of several PC games.

On the Windows side, Catalyst 13.6 adds OpenCL GPU acceleration support for Adobe's Premiere Pro CC software and enables AMD Wireless Display technology on systems with the company's A-Series APUs and either Broadcom or Atheros Wi-Fi chipsets. AMD has also made a couple of tweaks to its Enduro technology, including correctly identifying when a Metro app is idle and offloading the corresponding GPU tasks to the integrated graphics instead of the discrete card. The new beta driver also resolves an issue with audio dropouts over HDMI.

AMD Catalyst Drivers.jpg

On the Linux side of things, Catalyst 13.6 beta adds support for the following when using AMD's A10, A8, A6, and A4 APUs:

  • Ubuntu 13.04
  • Xserver 1.14
  • GLX_EXT_buffer_age

The driver fixes several bugs as well, including resolving black screen and corruption issues in Team Fortress 2, an issue with OpenGL applications and VSYNC, and UVD playback issues in XBMC where the taskbar would disappear and/or the system would experience a noticeable performance drop during playback.

You can grab the new beta driver from the AMD website.

Source: AMD

Bad news GPU fans, prices may be climbing

Subject: General Tech | April 3, 2013 - 01:21 PM |
Tagged: gpu, DRAM, ddr3, price increase

It has taken a while, but the climbing price of memory is about to have an effect on the price you pay for your next GPU.  DigiTimes specifically mentions DDR3, but as both GDDR4 and GDDR5 are based on DDR3 technology, they will suffer the same price increases.  Expect the new prices to stick around, as part of the reason for the increase in the price of RAM is a decrease in sales volume.  AMD may be hit harder overall than NVIDIA since it tends to put more memory on its cards, and buyers of value cards might see the biggest percentage increase, as those cards still sport 1GB or more of memory.

Money.jpg

"Since DDR3 memory prices have recently risen by more than 10%, the sources believe the graphics cards are unlikely to see their prices return to previous levels within the next six months unless GPU makers decide to offer promotions for specific models or launch next-generation products."

Here is some more Tech News from around the web:

Tech Talk

Source: DigiTimes

Grab a sprite and take a graphical trip down memory lane

Subject: General Tech | March 27, 2013 - 01:21 PM |
Tagged: gpu, history, get off my lawn

TechSpot has just published an article looking at the history of the GPU over the past decades, from the first NTSC-capable cards, through the golden 3dfx years, straight through to the modern GPGPU.  There have been a lot of standards over the years, such as MDA, CGA, and EGA, as well as different interfaces, from ISA and the graphics-card-specific AGP to our current PCIe standard.  The first article in this four-part series takes us from 1976 through to 1995 and the birth of the Voodoo series of accelerators.  Read on to bring back memories or perhaps to encounter some of this history for the first time.

TS_glquake.jpg

"The evolution of the modern graphics processor begins with the introduction of the first 3D add-in cards in 1995, followed by the widespread adoption of the 32-bit operating systems and the affordable personal computer. While 3D graphics turned a fairly dull PC industry into a light and magic show, they owe their existence to generations of innovative endeavour. Over the next few weeks we'll be taking an extensive look at the history of the GPU, going from the early days of 3D consumer graphics, to the 3Dfx Voodoo game-changer, the industry's consolidation at the turn of the century, and today's modern GPGPU."

Here is some more Tech News from around the web:

Tech Talk

Source: TechSpot
Author:
Manufacturer: PC Perspective

In case you missed it...

UPDATE: We have now published full details on our Frame Rating capture and analysis system as well as an entire host of benchmark results.  Please check it out!!

In one of the last pages of our recent NVIDIA GeForce GTX TITAN graphics card review, we included an update on our Frame Rating graphics performance metric that detailed the testing method further and showed results for the first time.  Because it was buried so far into the article, I thought it was worth posting this information here as a separate article to solicit feedback from readers and help guide the discussion forward without it getting lost in the TITAN shuffle.  If you already read that page of our TITAN review, nothing new is included below. 

I am still planning a full article based on these results sooner rather than later; for now, please leave me your thoughts, comments, ideas and criticisms in the comments below!


Why are you not testing CrossFire??

If you haven't been following our sequence of stories that investigates a completely new testing methodology we are calling "frame rating", then you are really missing out.  (Part 1 is here, part 2 is here.)  The basic premise of Frame Rating is that the performance metrics that the industry is gathering using FRAPS are inaccurate in many cases and do not properly reflect the real-world gaming experience the user has.

Because of that, we are working on another method that uses high-end dual-link DVI capture equipment to directly record the raw output from the graphics card with an overlay technology that allows us to measure frame rates as they are presented on the screen, not as they are presented to the FRAPS software sub-system.  With these tools we can measure average frame rates, frame times and stutter, all in a way that reflects exactly what the viewer sees from the game.

We aren't ready to show our full sets of results yet (soon!), but the problem lies in the fact that AMD's CrossFire technology shows severe performance degradation when viewed under the Frame Rating microscope that does not show up nearly as dramatically under FRAPS.  As such, I decided that it was simply irresponsible of me to present data to readers that I would then immediately refute on the final pages of this review (Editor: referencing the GTX TITAN article linked above) - it would be a waste of time for the reader, and people who skip straight to the performance graphs wouldn't know our theory on why the results displayed were invalid.

Many other sites will use FRAPS, will use CrossFire, and there is nothing wrong with that at all.  They are simply presenting data that they believe to be true based on the tools at their disposal.  More data is always better. 

Here are those results and our discussion.  I decided to use the most popular game out today, Battlefield 3, and please keep in mind this is NOT the worst-case scenario for AMD CrossFire in any way.  I tested the Radeon HD 7970 GHz Edition in single and CrossFire configurations as well as the GeForce GTX 680 in single and SLI configurations.  To gather results I used two processes:

  1. Run FRAPS while running through a repeatable section and record frame rates and frame times for 60 seconds
  2. Run our Frame Rating capture system with a special overlay that allows us to measure frame rates and frame times via post-processing.

Here is an example of what the overlay looks like in Battlefield 3.

fr_sli_1.jpg

Frame Rating capture on GeForce GTX 680s in SLI - Click to Enlarge

The column on the left is actually the visible portion of an overlay that is applied to each and every frame of the game early in the rendering process.  A solid color is added at the PRESENT call (more details to come later) for each individual frame.  As you know, when you are playing a game, multiple frames can make it to the screen during any single cycle of a 60 Hz monitor, and because of that you get a succession of colors on the left-hand side.

By measuring the pixel height of those colored columns, and knowing the order in which they should appear beforehand, we can gather the same data that FRAPS does, but our results are seen AFTER any driver optimizations and DX changes the game might make.
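As a rough illustration of that kind of post-processing (a hypothetical sketch, not the actual Frame Rating tooling), a script could scan the left-most pixel column of each captured frame, group contiguous runs of the same overlay color, and record each run's height in scanlines:

```python
# Hypothetical sketch of the overlay analysis step described above: scan the
# left-most pixel column of one captured 60 Hz frame, group contiguous runs of
# the same overlay color, and report each run's height in scanlines. The file
# name, helper name, and tolerance value are assumptions for illustration.

from PIL import Image

def measure_bands(path, column_x=0, tolerance=24):
    """Return a list of (color, height_in_scanlines) runs down one column."""
    img = Image.open(path).convert("RGB")
    width, height = img.size
    bands = []
    current_color, run = None, 0
    for y in range(height):
        pixel = img.getpixel((column_x, y))
        if current_color is not None and all(
            abs(a - b) <= tolerance for a, b in zip(pixel, current_color)
        ):
            run += 1
        else:
            if current_color is not None:
                bands.append((current_color, run))
            current_color, run = pixel, 1
    bands.append((current_color, run))
    return bands

# Example interpretation: a band 300 scanlines tall in a 1600-line capture means
# that rendered frame was on screen for 300/1600 of one ~16.7 ms refresh,
# or roughly 3.1 ms.
for color, scanlines in measure_bands("captured_frame.png"):
    print(color, scanlines)
```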

fr_cf_1.jpg

Frame Rating capture on Radeon HD 7970 CrossFire - Click to Enlarge

Here you see a very similar screenshot running on CrossFire.  Notice the thin silver band between the maroon and purple?  That is a complete frame according to FRAPS and most reviews.  Not to us - we think that rendered frame is almost useless. 
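As a hypothetical illustration of how such a sliver could be handled in analysis, bands below some height threshold could be classified as "runts" and counted separately from frames that contribute visible animation. The threshold and the sample numbers below are arbitrary examples, not the final Frame Rating methodology.

```python
# Hypothetical follow-on to the band measurement above: classify very thin
# bands as "runt" frames and compare a naive frame count against a count that
# ignores runts. The 20-scanline threshold is an arbitrary example.

RUNT_THRESHOLD_SCANLINES = 20

def observed_frame_counts(bands):
    """bands: list of (color, height_in_scanlines) runs from captured frames."""
    all_frames = len(bands)
    useful_frames = sum(1 for _, h in bands if h >= RUNT_THRESHOLD_SCANLINES)
    return all_frames, useful_frames

# A thin silver band a few scanlines tall still counts as a frame to FRAPS,
# but it contributes almost nothing the player can actually see.
bands = [("maroon", 810), ("silver", 4), ("purple", 786)]
print(observed_frame_counts(bands))   # (3, 2)
```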

Continue reading the third part in our Frame Rating series to see our first performance results!!

AMD Releases Catalyst 13.2 Beta GPU Driver With Optimizations For Crysis 3

Subject: Graphics Cards | January 31, 2013 - 08:04 AM |
Tagged: PC, gaming, amd, graphics drivers, gpu, Crysis 3, catalyst

The Crysis 3 beta was launched January 29th, and AMD came prepared with its new Catalyst 13.2 beta driver. In addition to the improvements rolled into the Catalyst 13.1 WHQL graphics driver, Catalyst 13.2 beta features performance improvements in a number of games.

Foremost, AMD focused on optimizing the drivers for the Crysis 3 beta. With the new 13.2 beta drivers, gamers will see a 15% performance improvement in Crysis 3 when using high MSAA settings. AMD has also committed itself to future tweaks to improve Crysis 3 performance when using both single and CrossFire graphics configurations. The driver also allows for a 10% improvement in CrossFire performance in Crytek’s Crysis 2 and a 50% performance boost in DMC: Devil May Cry when running a single AMD GPU. Reportedly, the new beta driver also reduces latency issues in Skyrim, Borderlands 2, and Guild Wars 2. Finally, the 13.2 beta driver resolves a texture filtering issue when running DirectX 9.0c games.

For more details on the driver, AMD has posted the change log on its blog as well as suggested image quality settings for AMD cards running the Crysis 3 beta.

Now watch: PC Perspective live streams the Crysis 3 Multiplayer Beta.

 

Source: AMD

AMD Releases Catalyst 13.1 GPU Drivers With Various Tweaks for Radeon HD 7000 Series

Subject: Graphics Cards | January 18, 2013 - 11:33 AM |
Tagged: Radeon HD 7000, gpu, drivers, catalyst 13.1, amd

AMD recently released a new set of Catalyst graphics card drivers with Catalyst 13.1. The new drivers are WHQL (Microsoft certified) and incorporate all of the fixes contained in the 12.11 beta 11 drivers. The Radeon HD 7000 series will see the majority of the performance and stability tweaks with 13.1. Additionally, the Catalyst 13.1 suite includes a new 3D settings interface in Catalyst Control Center that allows per-application profile management. The Linux version of the Catalyst 13.1 drivers now officially support Ubuntu 12.10 as well.

amd catalyst.jpg

Some of the notable performance tweaks for the HD 7000 series include:

  • Improved CrossFire scaling performance in Call of Duty: Black Ops II.
  • Up to a 25% performance increase in Far Cry 3 when using 8X MSAA.
  • An 8% performance increase in Sleeping Dogs and StarCraft II.
  • A 5% improvement in Max Payne 3.

 

New 3D Settings UI in Catalyst.jpg

Beyond the performance increases, AMD has fixed several bugs with the latest drivers. Some of the noteworthy fixes include:

  • Fixed a system hang on X58 and X79 chipset-based systems using HD 7000-series GPUs.
  • Fixed an intermittent hang with HD 7000-series GPUs in CrossFireX and Eyefinity configurations.
  • Resolved a system hang in Dishonored on 5000 and 6000 series graphics cards.
  • Resolved a video issue with Media Player Classic Home Cinema.
  • Added Super Sample Anti-Aliasing support in the OpenGL driver.

AMD has also released a new standalone uninstallation utility that will reportedly clean your system of AMD graphics card drivers to make way for newer versions. That utility can be downloaded here.

If you have a Radeon HD 7000-series card, it is worth updating your drivers ASAP. You can download the Catalyst 13.1 drivers from the AMD website.

You can find a full list of the performance tweaks and bug fixes in the Catalyst 13.1 release notes.

Source: AMD
Author:
Manufacturer: PC Perspective

Another update

In our previous article and video, I introduced you to our upcoming testing methodology for evaluating graphics cards based not only on frame rates but also on frame smoothness and the efficiency of those frame rates.  I showed off some of the new hardware we are using for this process and detailed how direct capture of graphics card output allows us to find interesting frame and animation anomalies using some Photoshop still frames.

d31.jpg

Today we are taking that a step further and looking at a couple of captured videos that demonstrate a "stutter" and walking you through, frame by frame, how we can detect, visualize and even start to measure them.

dis1.jpg

This video takes a couple of examples of stutter in games, DiRT 3 and Dishonored to be exact, and shows what they look like in real time, at 25% speed and then finally in a much more detailed frame-by-frame analysis.

 

Video Loading...

 

Obviously these are just a couple of instances of stutter, and there are oftentimes less apparent in-game stutters that are even harder to see in video playback.  Not to worry - this capture method is capable of catching those issues as well, and we plan on diving into that "micro" level shortly.

We aren't going to start talking about whose card and what driver is being used yet and I know that there are still a lot of questions to be answered on this topic.  You will be hearing more quite soon from us and I thank you all for your comments, critiques and support.

Let me know below what you thought of this video and any questions that you might have. 

Author:
Manufacturer: PC Perspective

A change is coming in 2013

If the new year will bring us anything, it looks like it might be the end of using "FPS" as the primary measuring tool for graphics performance on PCs.  A long, long time ago we started with simple "time demos" that recorded rendered frames in a game like Quake and then played them back as quickly as possible on a test system.  The lone result was given as a time in seconds, which was then converted to an average frame rate using the known total number of frames recorded.

More recently we saw a transition to frame rates over time and the advent of frame time graphs like the ones we have been using in our graphics reviews on PC Perspective. This expanded the amount of data required to get an accurate picture of graphics and gaming performance, but it was indeed more accurate, giving us a clearer image of how GPUs (and CPUs and systems, for that matter) performed in games.

And even though the idea of frame times has been around just as long, not many people were interested in getting into that level of detail until this past year.  A frame time is the amount of time each frame takes to render, usually listed in milliseconds, and it can range from 5ms to 50ms depending on performance.  For reference, 120 FPS equates to an average of 8.3ms per frame, 60 FPS to 16.6ms, and 30 FPS to 33.3ms.  But rather than averaging those out over each second of time, what if you looked at each frame individually?
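That relationship is simple enough to express directly, and it also shows why per-frame data matters: two runs with identical average frame rates can have very different individual frame times. A short sketch with made-up sample numbers:

```python
# The FPS/frame-time relationship described above, plus an example of looking
# at individual frames instead of a one-second average. The frame-time samples
# are made up for illustration.

def fps_to_frame_time_ms(fps):
    return 1000.0 / fps

for fps in (120, 60, 30):
    print(f"{fps} FPS -> {fps_to_frame_time_ms(fps):.1f} ms per frame")
# 120 FPS -> 8.3 ms, 60 FPS -> ~16.7 ms, 30 FPS -> 33.3 ms

# Two runs with the same average can feel very different frame to frame.
smooth  = [16.7] * 60                  # a steady 60 FPS
stutter = [8.0, 25.4] * 30             # same average, alternating frame times
for name, times in (("smooth", smooth), ("stutter", stutter)):
    avg_fps = 1000.0 * len(times) / sum(times)
    worst = sorted(times)[int(0.99 * len(times)) - 1]   # ~99th-percentile frame
    print(f"{name}: {avg_fps:.0f} FPS average, {worst:.1f} ms near-worst frame time")
```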

Video Loading...

Scott over at Tech Report started doing that this past year and found some interesting results.  I encourage all of our readers to follow up on what he has been doing as I think you'll find it incredibly educational and interesting. 

Through emails and tweets many PC Perspective readers have been asking for our take on it, why we weren't testing graphics cards in the same fashion yet, etc.  I've stayed quiet about it simply because we were working on quite a few different angles on our side and I wasn't ready to share results.  I am still not ready to share the bulk of our information yet, but I am ready to start the discussion, and I hope our community finds it compelling and offers some feedback.

card.jpg

At the heart of our unique GPU testing method is this card, a high-end dual-link DVI capture card capable of handling 2560x1600 resolutions at 60 Hz.  Essentially this card will act as a monitor to our GPU test bed and allow us to capture the actual display output that reaches the gamer's eyes.  This method is the best possible way to measure frame rates, frame times, stutter, runts, smoothness, and any other graphics-related metrics.

Using that recorded footage, which sometimes reaches 400 MB/s of consistent writes at high resolutions, we can then analyze the frames one by one, albeit with the help of some additional software.  There are a lot of details I am glossing over, including the need for perfectly synced frame rates and absolutely zero dropped frames during recording and analysis, but trust me when I say we have been spending a lot of time on this. 
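As a rough sanity check on that write-rate figure, the raw bandwidth of a 2560x1600, 60 Hz stream is easy to estimate; the pixel formats assumed below are illustrative, since the actual capture format isn't specified here:

```python
# Back-of-the-envelope capture bandwidth for a 2560x1600, 60 Hz stream.
# The two pixel formats are assumptions for illustration; the capture card's
# actual recording format is not stated in the article.

def capture_rate_mb_per_s(width, height, refresh_hz, bits_per_pixel):
    return width * height * refresh_hz * bits_per_pixel / 8 / 1e6

for label, bpp in (("RGB, 24 bits/pixel", 24), ("YUV 4:2:2, 16 bits/pixel", 16)):
    rate = capture_rate_mb_per_s(2560, 1600, 60, bpp)
    print(f"{label}: ~{rate:.0f} MB/s sustained")
# RGB, 24 bits/pixel: ~737 MB/s
# YUV 4:2:2, 16 bits/pixel: ~492 MB/s
```

Either format lands in the hundreds of megabytes per second, which is why sustained write rates in the 400 MB/s range at high resolutions are plausible.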

Continue reading our editorial on Frame Rating: A New Graphics Performance Metric.