
Retail Radeon R9 290X Frequency Variance Issues Still Present

Manufacturer: Sapphire

Another retail card reveals the results

Since the release of the new AMD Radeon R9 290X and R9 290 graphics cards, we have been very curious about the latest implementation of AMD's PowerTune technology and how it scales clock frequency based on the thermal state of each graphics card. In the first article covering this topic, I addressed the questions from AMD's point of view - is this really a "configurable" GPU as AMD claims, or are there issues that need to be addressed by the company?

The biggest problems I found were the highly variable clock speeds from game to game and from a "cold" GPU to a "hot" GPU. This affects the way many people in the industry test and benchmark graphics cards, as running a game for just a couple of minutes could result in average and reported frame rates that are much higher than what you see 10-20 minutes into gameplay. This was rarely something that had to be dealt with before (especially on AMD graphics cards), so it caught many off guard.


Because of the new PowerTune technology, as I have discussed several times before, clock speeds are starting off quite high on the R9 290X (at or near the 1000 MHz quoted speed) and then slowly drifting down over time.

Another wrinkle occurred when Tom's Hardware reported that retail graphics cards they had seen were showing markedly lower performance than the reference samples sent to reviewers. As a result, AMD quickly released a new driver that attempted to address the problem by normalizing fan behavior to a measured speed (RPM) rather than a percentage setting. The result was consistent fan speeds on different cards and thus much closer performance.
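To illustrate why that change matters, here is a minimal, purely conceptual sketch (not AMD's driver code): commanding the same percentage on two different fans can produce different RPM and therefore different cooling, while closing the loop on measured RPM drives both fans to the same speed. The toy fan model and the gain value are made-up assumptions for illustration.

```python
# Conceptual sketch only (not AMD's driver code): why targeting a measured RPM
# instead of a fixed percentage makes two different fans behave the same.

class Fan:
    """Toy fan model: the same duty cycle yields different RPM on different units."""
    def __init__(self, rpm_per_percent: float):
        self.rpm_per_percent = rpm_per_percent
        self.duty = 40.0

    def rpm(self) -> float:
        return self.duty * self.rpm_per_percent

# Two "retail" fans whose response to the same duty cycle differs slightly.
fan_a, fan_b = Fan(rpm_per_percent=52.0), Fan(rpm_per_percent=48.0)

# Percentage mode (old behavior): same duty cycle, different RPM, different cooling.
print("percentage mode:", fan_a.rpm(), "vs", fan_b.rpm())  # 2080.0 vs 1920.0 RPM

# RPM mode (the 13.11 V9.2 behavior, sketched): nudge each fan's duty cycle until
# its measured RPM hits a common target, so both cards end up with the same airflow.
TARGET_RPM, KP = 2200.0, 0.002
for fan in (fan_a, fan_b):
    for _ in range(200):
        error = TARGET_RPM - fan.rpm()
        fan.duty = min(100.0, max(0.0, fan.duty + KP * error))
print("rpm mode:", round(fan_a.rpm()), "vs", round(fan_b.rpm()))  # both ~2200 RPM
```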

Even with that fix in place, I have continued testing retail AMD Radeon R9 290X and R9 290 cards that were PURCHASED rather than sampled, to keep tabs on the situation.


After picking up a retail, off-the-shelf Sapphire branded Radeon R9 290X, I set out to do more testing. This time though, rather than simply game for a 5 minute window, I decided to loop gameplay in Metro: Last Light for 25 minutes at a resolution of 2560x1440 with Very High quality settings. The results, as you'll see, are pretty interesting. The "reference" card labeled here is the original R9 290X sampled to me from AMD directly.

Our first set of tests show the default, Quiet mode on the R9 290X.


For nearly the first 3 minutes of gameplay, both cards perform identically and are able to stick near the 1.0 GHz clock speed advertised by AMD and its partners. At that point, though, the blue line, representing the Sapphire R9 290X retail card, starts to drop its clock speed, settling around the 860 MHz mark.

The green line lasts a bit longer at 1000 MHz, until around 250 seconds (just over 4 minutes) have elapsed, and then it too starts to drop in clock speed. But the decrease is not nearly as dramatic - clocks seem to hover in the mid-930 MHz range.


In fact, over the entire 25 minute period (1500 seconds) shown here, the retail R9 290X card averaged 869 MHz (including the time at the beginning at 1.0 GHz) while the reference card sent to us from AMD averaged 930 MHz. That works out to a 6.5% clock speed deficit, which should translate almost directly into a performance difference in games that are GPU limited (most of them).
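For anyone who wants to reproduce this kind of comparison from their own clock logs, here is a minimal sketch of the arithmetic. It assumes a simple per-second CSV log with a core clock column; the file names and the "core_clock_mhz" column name are illustrative assumptions, not the actual logs behind the graphs above.

```python
# Minimal sketch: average each card's logged core clock and compute the deficit.
# Assumes per-second CSV logs with a "core_clock_mhz" column (an assumption --
# adjust the column name to whatever your logging tool writes).
import csv

def average_clock(path: str) -> float:
    with open(path, newline="") as f:
        samples = [float(row["core_clock_mhz"]) for row in csv.DictReader(f)]
    return sum(samples) / len(samples)

retail = average_clock("sapphire_290x_quiet.csv")      # ~869 MHz in the article
reference = average_clock("reference_290x_quiet.csv")  # ~930 MHz in the article
deficit = (reference - retail) / reference * 100
print(f"retail {retail:.0f} MHz, reference {reference:.0f} MHz, deficit {deficit:.1f}%")
```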


The fan speed adjustment made by AMD with the 13.11 V9.2 driver was functioning as planned though - both cards were running at the expected 2200 RPM levels and ramped up nearly identically as well. 

But what changes if we switch over to Uber mode on the R9 290X, the setting that enables a 55% fan speed and, with it, more noise?


You only see the blue line of the Sapphire results here because it is drawn over the green line of the reference card - both are running at essentially the same performance levels and hold the 1000 MHz frequency nearly across the entirety of the 25 minute gaming period. The retail card averages 996 MHz while the reference card averages 999 MHz - pretty damn close.


However, what I found very interesting is that these cards did this at different fan speeds. It would appear that the 13.11 V9.2 driver did NOT normalize the fan speeds for Uber mode, as a reported 55% on both cards results in fan speeds that differ by about 200 RPM. That means the blue line, representing our retail card, is going to run louder than the reference card, and not by a tiny margin.

 

The Saga Continues...

As we approach the holiday season, I am once again left with information that casts retail R9 290X cards in a bad light compared to the sampled ones, but without enough data to really draw any kind of definitive conclusion. In reality, I would need dozens of R9 290X or R9 290 cards to make a concrete statement on the methods that AMD is employing, but unfortunately my credit card wouldn't appreciate that.

Even though we are only showing a single retail card against a single sampled R9 290X from AMD directly, these reports continue to pile up.  The 6.5% clock speed difference we are seeing seems large enough to warrant concern, but not large enough to start a full-on battle over it. 


My stance on the Hawaii architecture and the new PowerTune technology remains the same even after this new data: AMD needs to define a "base" clock and a "typical" clock that users can expect. Otherwise, we will continue to see reports on the variance that exists between retail units. The quick fix that AMD's driver team implemented to normalize the fan speed on RPM rather than percentage clearly helped, but it has not addressed the issue in full.

Here's hoping AMD comes back from the holiday with some new ideas in mind!

November 26, 2013 | 01:54 PM - Posted by specopsFI (not verified)

The thing is, it's the same with GK110. Depending on the quality of your chip, you will get different amounts of throttling (which I'm calling the drop from max boost because of temp/power limits). Since no two chips are ever equal, both companies' power balancing technologies will unavoidably result in variance in sustainable clock rates.

True, at least Nvidia isn't selling their cards by shouting out their max boost clocks, but if the review card has a better chip than what one ends up buying, the effect is the same - worse performance than expected. Both companies are selling their cards with reviews and now both companies have a "review boost" technology. It's not cool and it makes objective GPU testing a pain, but I suppose it is what it is from now on.

November 26, 2013 | 07:38 PM - Posted by Ryan Shrout

While you are correct that variance occurs on the GeForce GPUs, in my experience it isn't this extreme.

November 28, 2013 | 12:14 AM - Posted by Steve (not verified)

You just don't test this with hundreds of cards. That is what I'm doing day by day, so I can tell you that I have found some GTX 780 Ti cards that are 8% slower than the defined reference speed. In my database the average variance is 5% for the GTX 780 Ti and 3% for the R9 290X. But this is normal; this is how these cards are designed to work.

November 28, 2013 | 08:41 AM - Posted by John Hendrick (not verified)

Oddly, I have done similar work and I get exactly the opposite. The 780 Ti shows no more than 2% variance (I define variance as the full spread in measured frequency about the mean, divided by the mean), averaged over 72 cards. On the R9 290X I am seeing 13.4% ... this is averaged over 56 cards.
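For clarity, a tiny sketch of that variance metric (the full spread about the mean, divided by the mean), with made-up sample values rather than measured data:

```python
# Tiny sketch of the variance metric described above: (max - min) / mean, as a
# percentage. The sample clock values are made up for illustration only.
def spread_variance(clocks_mhz: list[float]) -> float:
    mean = sum(clocks_mhz) / len(clocks_mhz)
    return (max(clocks_mhz) - min(clocks_mhz)) / mean * 100

print(spread_variance([1000.0, 940.0, 900.0, 870.0]))  # ~14.0% spread about the mean
```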

The R9 290X is clearly running too hot and sits smack dab at the very limit of its junction temperature. This card will likely run much slower in Phoenix, Arizona than it does in Quebec, Canada.

AMD should dial back the specs on the card, whether this is intentional or not, it is not putting them in a good light.

November 28, 2013 | 09:46 AM - Posted by Anonymous (not verified)

Just another Nvidia-sponsored article. Sales are down, so we will see a few of these from sites like PCP that can be bought to repeat the same old story over and over again.

It's very simple: if the ambient room temp is not the same in Quebec and Arizona, just adjust the fan speed. Most enthusiasts buying a $400-and-up graphics card know that, and actually very few, if any, bought the card with the intention of using the reference cooler.

November 29, 2013 | 01:08 AM - Posted by Klimax (not verified)

Facts cannot be bought... dear AMD fanboy who's unable to handle facts.

December 2, 2013 | 01:29 PM - Posted by aparsh335i (not verified)

Can you explain why it is that you have had so many of these video cards in your hands for testing? That's over $50,000 of video cards.

November 28, 2013 | 08:42 PM - Posted by Anonymous (not verified)

I have to commend you on a thorough and honest evaluation of the ATI graphics card. It's extremely hard to find unbiased reviews anymore. Your ability to translate hi-tech information into easy-to-understand language is quite high! It's extremely important to me, especially in this economy, to only support major manufacturers who go out of their way to show integrity in their product.

Thanks for your work!

November 27, 2013 | 02:28 AM - Posted by Sihastru

That's not really true. Kepler has a base clock for every model, and the cards will NEVER go below the base clock (unless you take the cooler off of them, or your PSU is faulty). NVIDIA said from day 1 that some cards will BOOST a bit better than others.

But since they don't set the clocks so close to the limit of the GPU, absolutely all cards will achieve the preset BOOST clocks and actually go well beyond that. Some will go a step higher, but it's a small step and performance is about the same and well within the margin of error for the benchmarking process.

AMD on the other hand says the cards have an "up to 1GHz" clockspeed. The retailers sell the cards as having a "1GHz" clockspeed, losing the "up to" qualifier. And in reality the cards will ALWAYS clock lower than 1GHz, in some cases A LOT LOWER, around 650-750MHz, because they throttle so heavily.

And the fact that it's so easy to find retail cards underperforming review cards raises an important point, because it indicates that it's not just a small-batch problem. It also raises the question of whether AMD sent reviewers HAND PICKED cards that performed better than what consumers will get in retail.

But to summarize, NVIDIA sells you a baseline performance that ALL cards will achieve and that's guaranteed, and maybe your store-bought one will be a bit faster if you're lucky, while AMD sells you the dream of a top performing card that, it seems, will NEVER be the case with your store-bought card because, well, you're unlucky.

And of course they didn't send golden samples to reviewers, because they're AMD and only NVIDIA is capable of bad things. And in case you didn't detect it, this was sarcasm.

November 27, 2013 | 04:13 AM - Posted by specopsFI (not verified)

I agree on the clock rate advertisement aspect, as I already wrote. What I don't wholeheartedly agree on is the extent to which advertised clock rates make a difference for sales. I'd say that for such high-end GPUs, the review performance is way more important than stated clock rates on the box. Both Nvidia and AMD have set up their new cards (GPU Boost 2.0 and PowerTune 2.0) so that they perform better on a short benchmark than on a sustained gaming session.

IMO, the biggest issue those technologies raise is in regards to their customizable nature. Based on user choices, one can get wildly different performance levels for these cards. For example, I think it's unfair to test the 290X on über mode (or even worse, with uncapped fan speed) and leave the GK110 cards on their stock power and temperature limits. One can make the choice to get sustainable full performance of the Hawaii cards by making the fan spin faster to keep them from throttling. One can also get the GK110 cards to sustain their maximum boost clock constantly by upping their power and temp limits. So for a comparison test to get the baseline performance differences for these chips on even terms is a really big challenge.

November 26, 2013 | 02:03 PM - Posted by Rick (not verified)

Ryan

I assume you took the sample card apart for pictures? I am also assuming you reapplied TIM?

Did you do that with the retail card? The stock TIM application on GPUs usually isn't good.

November 26, 2013 | 07:38 PM - Posted by Ryan Shrout

I didn't do any of that on any cards we have tested.

November 26, 2013 | 02:07 PM - Posted by collie (not verified)

There are 2 big questions I think are on everybody's mind (they're certainly on mine): 1) Are these truly flaws in the core or a simple hiccup in the first iteration? Is this the kind of thing that (much like the first Bulldozer, Llano and FX chips) shows promise now, but by the time of the R9 390X or whatever the next one will be called, all the problems will be worked out? 2) Are we expecting to see this Hawaii architecture used in the rest of the next generation of AMD GPUs, like the R7 340, R9 370X, etc.?

December 16, 2013 | 04:34 AM - Posted by Dan (not verified)

Unfortunately, cards such as Sapphire VaporX should have been available from day one; not the crappy reference cooler with TIM that looks like it was literally thrown on the GPU ...

Lack of attention to detail and quality indicates that AMD does NOT care about its customers [and I have been one for years].

November 26, 2013 | 02:07 PM - Posted by Rick (not verified)

Sorry to spam the comments

But what is the ASIC quality on the gpus?

November 26, 2013 | 07:41 PM - Posted by Ryan Shrout

I'm not sure I understand the question?

November 27, 2013 | 02:12 AM - Posted by Sihastru

GPU-Z has an ASIC Quality reading, but it doesn't really tell you anything about the quality of the chip, and nobody actually knows what it means. I have two identical 680s with consecutive serial numbers. One has a 70-ish ASIC Quality and one has a 90-ish ASIC Quality, and the one with the lower number overclocks a lot better.

November 27, 2013 | 02:57 AM - Posted by Anonymous (not verified)

ASIC quality, AFAICT, as reported by GPU-z, deals mostly with impedance of the circuitry.

There are minor differences in the thickness of the metal layers and completeness of the doping of the silicon as well as other factors that alter resistance, conductivity, and capacitance. Lower resistance generates lower heat (of course) and requires lower voltage to operate. This makes for better overclocking - at least at room temperature... Higher maximum conductivity can mean higher resistance at lower voltages, and capacitance can really be ignored - but the relationship of them together is often referred to as 'voltage leakage.'

Higher resistance, as mentioned, can be a sign that the chip can carry higher current and handle higher voltages. This is great for being able to push the chip to the max when cooling isn't a factor... BUT, if cooling is a factor (which it is in all cases - except LN2 or Helium...), then you want lower resistance - and LOWER ASIC quality...

However, if you are using sub-ambient cooling, you want higher ASIC quality - so you can push more voltage and current and achieve higher clocks.

November 27, 2013 | 03:10 AM - Posted by Anonymous (not verified)

Sorry, when I said You wanted "LOWER ASIC quality" for air and HIGHER for LN2, I got those backwards, LOL!

Higher ASIC quality = lower resistance, thus lower voltage, less heat, higher clocks on air, but less voltage & current tolerance.

November 27, 2013 | 08:07 AM - Posted by General lee (not verified)

This is further complicated by chip binning. For example 7970 has four known bins that have different default voltages, based on their ASIC quality number.

up to 2F90 (up to 75% quality) - 1.1750V
up to 34D0 (up to 80% quality) - 1.1125V
up to 3820 (up to 85% quality) - 1.0500V
up to 3A90 (up to 90% quality) - 1.0250V

The raw ASIC quality number is useless if you don't know the bins, at least for non-extreme OC.

December 1, 2013 | 05:59 AM - Posted by Rick (not verified)

So what are the ASIC numbers?

November 26, 2013 | 02:29 PM - Posted by EndTimeKeeper

Hey Ryan, thanks for all the great info about all these cards. If you have the time, you might like to check out two of my videos demonstrating my setup and how my R9 290X (BF4 Limited Edition from Sapphire, the same one that you are using in your article) is running. Thanks again for all the hard work and info.

Here are the links to my videos. Sorry for them being kind of rough, I don't make a lot of videos lol.
1. https://www.youtube.com/watch?v=VicPlkKdRGk
2. https://www.youtube.com/watch?v=Ht7cDpu_PwE

November 26, 2013 | 03:56 PM - Posted by snook

strange, we can all be sure NV just randomly grabs GPUs off a shelf/production line and sends them to reviewers ;/

variations are too wide for sure.

November 26, 2013 | 07:42 PM - Posted by Ryan Shrout

I'm sure that NVIDIA does some testing before sending out review samples, but it's all about how they react to that data. Do they send out 'average' cards, 'above average' or 'f'ing amazing' cards?

November 26, 2013 | 08:29 PM - Posted by Anonymous (not verified)

I look forward to your retail vs press sample AMD/Nvidia shoot out review.

November 27, 2013 | 12:58 AM - Posted by snook

fing amazing. if not they would be extremely foolish for not doing so.

they aren't saints certainly.

December 2, 2013 | 06:18 AM - Posted by Panta

that's why consumers can never trust reviewers' OC results.

November 26, 2013 | 04:38 PM - Posted by Anonymous (not verified)

Since PC Per and other tech sites don't want to be labeled as Nvidia leaning fanboys I'm just gonna call this what it is.

AMD, despite delivering a good product, has apparently pulled some shady shit in order to paint their performance numbers better than what can be expected in retail by delivering cherry-picked GPUs for preview and review.

I think Gordon Mah Ung at Maximum PC brought up a very damning point about all of this.

>>>>>>>>To his knowledge, there are numerous reports about retail cards having variance but NOT A SINGLE REPORT of any variance in any of the handpicked cards shipped for review from AMD.<<<<<<<<

If this turns out to be true, AMD better own up to this crap, fire the people responsible and be wholeheartedly transparent about expected performance for the average user.

I like AMD, but if this is the way they are going to do business, they can kiss my a..

I know someone at AMD is reading this. Wise up, own up, and be transparent. You have a lot more on the line than you think.

November 27, 2013 | 03:03 AM - Posted by Anonymous (not verified)

I don't believe AMD is doing anything untoward in this case. Many people in the owner's forum have reported that they have much lower throttling than even the review samples.

I just think, if anything, AMD may have been more closely monitoring the first produced cards for production issues, then as production ramped up stopped paying enough attention - something that usually isn't a problem... a card running 5-10C hotter, but still well within tolerances, isn't a problem on other cards...

I think one solution would be to change the TIM they're using and try to weed out the variability in the application better. Others have done this with surprisingly good results. A little higher quality TIM has allowed some to not throttle at all with stock fan settings - but most just gain 50MHz or so on average.

November 26, 2013 | 05:32 PM - Posted by Anonymous (not verified)

Let's not forget he went on a tirade about how Uber Mode was invalid because it's not a "default" out-of-the-box setting.

He's probably making out with Tom Peterson in exchange for the Nvidia ads on his site.

Use chapstick don't want your lips dry'n up.
