Feedback

Retail Radeon R9 290X Frequency Variance Issues Still Present

Manufacturer: Sapphire

Another retail card reveals the results

Since the release of the new AMD Radeon R9 290X and R9 290 graphics cards, we have been very curious about the latest implementation of AMD's PowerTune technology and its scaling of clock frequency as a result of the thermal levels of each graphics card.  In the first article covering this topic, I addressed the questions from AMD's point of view - is this really a "configurable" GPU as AMD claims or are there issues that need to be addressed by the company? 

The biggest problems I found were the highly variable clock speeds from game to game and from a "cold" GPU to a "hot" GPU.  This affects the way many people in the industry test and benchmark graphics cards, as running a game for just a couple of minutes could produce average and reported frame rates that are much higher than what you see 10-20 minutes into gameplay.  This was rarely something that had to be dealt with before (especially on AMD graphics cards), so it caught many off-guard.

Because of the new PowerTune technology, as I have discussed several times before, clock speeds are starting off quite high on the R9 290X (at or near the 1000 MHz quoted speed) and then slowly drifting down over time.
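
As a toy illustration of that behavior, the loop below simulates a card that sheds clock speed as a modeled temperature approaches a limit and ramps back up when it cools. This is only a sketch: every constant (the heating and cooling coefficients, the 94C limit, the 727 MHz floor) is invented for illustration and is not AMD's actual PowerTune algorithm.

```python
# Illustrative only: a toy thermal-throttling loop in the spirit of PowerTune,
# not AMD's real algorithm. All constants here are invented for the sketch.

def simulate(minutes, ambient=30.0, temp_limit=94.0,
             max_clock=1000, min_clock=727, step=5):
    """Step a crude heat model once per 'second'; shed clocks when hot."""
    temp, clock = ambient, max_clock
    samples = []
    for _ in range(minutes * 60):
        heat_in = 3.2 * clock / max_clock    # heating scales with clock speed
        cooling = 0.04 * (temp - ambient)    # cooling scales with delta-T
        temp += heat_in - cooling
        if temp >= temp_limit:
            clock = max(min_clock, clock - step)       # throttle down
        elif temp < temp_limit - 2 and clock < max_clock:
            clock = min(max_clock, clock + step)       # ramp back up
        samples.append(clock)
    return samples

clocks = simulate(25)
print(max(clocks), min(clocks), round(sum(clocks) / len(clocks)))
```

Even in this crude model, the run starts at the full 1000 MHz and the long-run average settles well below it, which is the shape of the curves in the graphs that follow.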

Another wrinkle occurred when Tom's Hardware reported that retail graphics cards they had seen were showing markedly lower performance than the reference samples sent to reviewers.  In response, AMD quickly released a new driver that addressed the problem by normalizing on fan speed (RPM) rather than fan voltage (percentage).  The result was consistent fan speeds across different cards and thus much closer performance.
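
A quick sketch of why that change matters: the same fan duty-cycle percentage can spin physically different fans at different rates, while targeting an RPM equalizes them. The fan-response figures below are hypothetical, chosen only to show the mechanism.

```python
# Toy illustration of the driver change. The RPM-per-percent numbers are
# invented; real fans are not perfectly linear either.

def rpm_from_duty(duty_pct, rpm_per_pct):
    """Simplified linear fan response."""
    return duty_pct * rpm_per_pct

card_a, card_b = 55.0, 52.0   # hypothetical fan responses, RPM per % duty

# Before the fix: both cards commanded to 40% duty, but they spin differently
print(rpm_from_duty(40, card_a), rpm_from_duty(40, card_b))   # 2200.0 2080.0

# After the fix: both chase the same RPM target, using whatever duty it takes
def duty_for_rpm(target_rpm, rpm_per_pct):
    return target_rpm / rpm_per_pct

print(duty_for_rpm(2200, card_a), round(duty_for_rpm(2200, card_b), 1))
```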

With all that being said, I have continued testing retail AMD Radeon R9 290X and R9 290 cards that were PURCHASED rather than sampled, to keep tabs on the situation. 

After picking up a retail, off-the-shelf Sapphire branded Radeon R9 290X, I set out to do more testing.  This time, rather than simply game for a 5 minute window, I decided to loop gameplay in Metro: Last Light for 25 minutes at a resolution of 2560x1440 with Very High quality settings.  The results, as you'll see, are pretty interesting.  The "reference" card labeled here is the original R9 290X sampled to me directly from AMD. 

Our first set of tests show the default, Quiet mode on the R9 290X.

For nearly the first 3 minutes of gameplay, both cards perform identically and are able to stick near the 1.0 GHz clock speed advertised by AMD and its partners.  At that point, though, the blue line, representing the Sapphire R9 290X retail card, starts to drop its clock speed, settling around the 860 MHz mark.

The green line lasts a bit longer at 1000 MHz, until around 250 seconds (just over 4 minutes) have elapsed, then it too starts to drop in clock speed.  But the decrease is not nearly as dramatic - clocks seem to hover in the mid-930 MHz range. 

In fact, over the entire 25 minute period (1500 seconds) shown here, the retail R9 290X card averaged 869 MHz (including the time at the beginning at 1.0 GHz) while the reference card sent to us by AMD averaged 930 MHz.  That is a 6.5% clock speed deficit, which should almost perfectly match the performance difference in games that are GPU limited (most of them).
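
The arithmetic behind that figure takes only a few lines. The two averages are the ones reported above; the two-segment log is a hypothetical stand-in for the real per-second clock data, just to show that the averaging includes the early full-speed portion.

```python
# Worked arithmetic for the clock-speed deficit quoted in the text.

def average_mhz(samples):
    """Mean clock over a logged run, one sample per second."""
    return sum(samples) / len(samples)

retail_avg = 869       # MHz, Sapphire retail card over the 1500 s run
reference_avg = 930    # MHz, AMD press sample over the same run

deficit = (reference_avg - retail_avg) / reference_avg
print(f"{deficit:.1%}")   # prints "6.6%" (~6.5% as quoted, before rounding)

# Hypothetical log: 240 s at full clock, then 1260 s throttled to 860 MHz.
warm_up, sustained = [1000] * 240, [860] * 1260
print(round(average_mhz(warm_up + sustained)))   # prints 882
```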

The fan speed adjustment made by AMD with the 13.11 V9.2 driver was functioning as planned though - both cards were running at the expected 2200 RPM levels and ramped up nearly identically as well. 

But what changes if we switch over to Uber mode on the R9 290X, the setting that enables a 55% fan speed and, with it, more noise?

You only see the blue line of the Sapphire results here because it is drawn over the green line of the reference card - both are running at essentially the same performance levels and hold the 1000 MHz frequency across nearly the entirety of the 25 minute gaming period.  The retail card averages 996 MHz while the reference card averages 999 MHz - pretty damn close.

However, what I found very interesting is that the cards did this at different fan speeds.  It would appear that the 13.11 V9.2 driver did NOT normalize fan speeds for Uber mode, as the 55% reported on both cards results in fan speeds that differ by about 200 RPM.  That means the blue line, representing our retail card, is going to run louder than the reference card, and not by a tiny margin.

 

The Saga Continues...

As we approach the holiday season, I am once again left with information that casts retail R9 290X cards in a bad light versus sampled ones, but without enough data to make any kind of definitive conclusion.  In reality, I would need dozens of R9 290X or R9 290 cards to make a concrete statement on the methods AMD is employing, but unfortunately my credit card wouldn't appreciate that.

Even though we are only showing a single retail card against a single sampled R9 290X from AMD directly, these reports continue to pile up.  The 6.5% clock speed difference we are seeing seems large enough to warrant concern, but not large enough to start a full-on battle over it. 

My stance on the Hawaii architecture and the new PowerTune technology remains the same even after this new data: AMD needs to define a "base" clock and a "typical" clock that users can expect.  Otherwise, we will continue to see reports on the variance that exists between retail units.  The quick fix AMD's driver team implemented, normalizing fan speed on RPM rather than percentage, clearly helped, but it has not addressed the issue in full.

Here's hoping AMD comes back from the holiday with some new ideas in mind!

November 26, 2013 | 04:54 PM - Posted by specopsFI (not verified)

The thing is, it's the same with GK110. Depending on the quality of your chip, you will get different amounts of throttling (which I'm calling the drop from max boost because of temp/power limits). Since no two chips are ever equal, both companies' power balancing technologies will unavoidably result in variance in sustainable clock rates.

True, at least Nvidia isn't selling their cards by shouting out their max boost clocks, but if the review card has a better chip than what one ends up buying, the effect is the same - worse performance than expected. Both companies are selling their cards with reviews and now both companies have a "review boost" technology. It's not cool and it makes objective GPU testing a pain, but I suppose it is what it is from now on.

November 26, 2013 | 10:38 PM - Posted by Ryan Shrout

While you are correct that variance occurs on the GeForce GPUs, in my experience it isn't this extreme.

November 28, 2013 | 03:14 AM - Posted by Steve (not verified)

You just don't test this with hundreds of cards. That is what I'm doing day by day, so I can tell you that I have found some GTX780Ti cards that are 8% slower than the defined reference speed. In my database the average variance is 5% for the GTX780Ti and 3% for the R9-290X. But this is normal; this is how these cards are designed to work.

November 28, 2013 | 11:41 AM - Posted by John Hendrick (not verified)

Oddly, I have done similar work and I get exactly the opposite. The 780Ti gets no more than 2% variance (I define variance as the full spread in measured frequency about the mean, divided by the mean), averaged over 72 cards. On the R9-290X I am seeing 13.4% ... this is averaged over 56 cards.
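
For what it's worth, the metric described here (full spread divided by the mean) is easy to express; the clock log below is hypothetical, just to show the calculation.

```python
# The commenter's variance metric: (max - min) / mean of measured frequency.
# The log below is a made-up 10-sample clock trace, not real card data.

def spread_over_mean(freqs_mhz):
    mean = sum(freqs_mhz) / len(freqs_mhz)
    return (max(freqs_mhz) - min(freqs_mhz)) / mean

log = [1000, 1000, 980, 940, 900, 880, 870, 865, 860, 860]  # MHz, hypothetical
print(f"{spread_over_mean(log):.1%}")   # prints "15.3%"
```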

The R9-290X is clearly running too hot, smack dab at the very limit of the junction temperature. This card will likely run much slower in Phoenix, Arizona than it does in Quebec, Canada.

AMD should dial back the specs on the card, whether this is intentional or not, it is not putting them in a good light.

November 28, 2013 | 12:46 PM - Posted by Anonymous (not verified)

Just another Nvidia sponsored article. The sales are down, so we'll see a few of these from sites like PCPer that can be bought to repeat the same old story over and over again.

It's very simple: if the ambient room temps are not the same in Quebec or Arizona, just adjust the fan speed. Most enthusiasts buying a $400-and-up graphics card know that, and actually very few, if any, bought the card with the intention of using the reference cooler.

November 29, 2013 | 04:08 AM - Posted by Klimax (not verified)

Facts cannot be bought... dear AMD fanboy who's unable to handle facts.

December 2, 2013 | 04:29 PM - Posted by aparsh335i (not verified)

Can you explain why it is that you have had so many of these video cards in your hands for testing? That's over $50,000 of video cards.

November 28, 2013 | 11:42 PM - Posted by Anonymous (not verified)

I have to commend you on a thorough and honest evaluation of the ATI graphics card. It's extremely hard to find unbiased reviews anymore. Your ability to translate hi-tech information into easy to understand language is quite high! It's extremely important to me, especially in this economy, to only support major manufacturers who go out of their way to show integrity in their product.

Thanks for your work!

November 27, 2013 | 05:28 AM - Posted by Sihastru

That's not really true. Kepler has a base clock for every model, and the cards will NEVER go below the base clock (unless you take the cooler off them, or your PSU is faulty). NVIDIA said from day 1 that some cards will BOOST a bit better than others.

But since they don't set the clocks so close to the limit of the GPU, absolutely all cards will achieve the preset BOOST clocks, and actually go well beyond that. Some will go a step higher, but it's a small step and performance is about the same and well within the margin of error for the benchmarking process.

AMD on the other hand says the cards have an "up to 1GHz" clock speed. The retailers sell the cards as having a "1GHz" clock speed, losing the "up to" moniker. And in reality the cards will ALWAYS clock lower than 1GHz, in some cases A LOT LOWER, around 650-750MHz, because they throttle so heavily.

And the fact that it's so easy to find retail cards underperforming review cards does raise an important point, because it shows this is not just a small-batch problem. It also raises the question of whether AMD sent reviewers HAND PICKED cards that performed better than what consumers will get in retail.

But to summarize, NVIDIA sells you a baseline performance that ALL cards will achieve, guaranteed, and maybe your store-bought one will be a bit faster if you're lucky, while AMD sells you the dream of a top performing card that, it seems, will NEVER be the case with your store-bought card because, well, you're unlucky.

And of course they didn't send golden samples to reviewers, because they're AMD and only NVIDIA is capable of bad things. And in case you didn't detect it, this was sarcasm.

November 27, 2013 | 07:13 AM - Posted by specopsFI (not verified)

I agree on the clock rate advertisement aspect, as I already wrote. What I don't wholeheartedly agree on is the extent to which advertised clock rates make a difference for sales. I'd say that for such high-end GPUs, the review performance is way more important than stated clock rates on the box. Both Nvidia and AMD have set up their new cards (GPU Boost 2.0 and PowerTune 2.0) so that they perform better on a short benchmark than on a sustained gaming session.

IMO, the biggest issue these technologies raise is their customizable nature. Based on user choices, one can get wildly different performance levels from these cards. For example, I think it's unfair to test the 290X in über mode (or even worse, with uncapped fan speed) and leave the GK110 cards on their stock power and temperature limits. One can make the choice to get sustainable full performance from the Hawaii cards by making the fan spin faster to keep them from throttling. One can also get the GK110 cards to sustain their maximum boost clock constantly by upping their power and temp limits. So getting the baseline performance differences for these chips on even terms is a really big challenge for any comparison test.

November 26, 2013 | 05:03 PM - Posted by Rick (not verified)

Ryan

I assume you took the sample card apart for pictures? I am also assuming you reapplied TIM?

Did you do that with the retail card? The stock TIM application on GPUs usually isn't good.

November 26, 2013 | 10:38 PM - Posted by Ryan Shrout

I didn't do any of that on any cards we have tested.

November 26, 2013 | 05:07 PM - Posted by collie (not verified)

There are 2 big questions I think are on everybody's mind, certainly on mine: 1) Are these truly flaws in the core or a simple hiccup in the first iteration - is this the kind of thing that (much like the first Bulldozer, Llano and FX chips) shows promise now but, by the time of the R9-390X or whatever the next one will be called, will have all the problems worked out? 2) Are we expecting to see this Hawaii architecture used in the rest of the next generation AMD GPUs, like the R7-340, R9-370X etc.?

December 16, 2013 | 07:34 AM - Posted by Dan (not verified)

Unfortunately, cards such as the Sapphire Vapor-X should have been available from day one, not the crappy reference cooler with TIM that looks like it was literally thrown on the GPU ...

The lack of attention to detail and quality indicates that AMD does NOT care about its customers [and I have been one for years].

November 26, 2013 | 05:07 PM - Posted by Rick (not verified)

Sorry to spam the comments

But what is the ASIC quality on the gpus?

November 26, 2013 | 10:41 PM - Posted by Ryan Shrout

I'm not sure I understand the question?

November 27, 2013 | 05:12 AM - Posted by Sihastru

GPU-Z has an ASIC Quality reading, but it doesn't really tell you anything about the quality of the chip, and nobody actually knows what it means. I have two identical 680's with consecutive serial numbers. One has 70-ish ASIC Quality and one has 90-ish ASIC Quality, and the one with the lower number overclocks a lot better.

November 27, 2013 | 05:57 AM - Posted by Anonymous (not verified)

ASIC quality, AFAICT, as reported by GPU-z, deals mostly with impedance of the circuitry.

There are minor differences in the thickness of the metal layers and completeness of the doping of the silicon as well as other factors that alter resistance, conductivity, and capacitance. Lower resistance generates lower heat (of course) and requires lower voltage to operate. This makes for better overclocking - at least at room temperature... Higher maximum conductivity can mean higher resistance at lower voltages, and capacitance can really be ignored - but the relationship of them together is often referred to as 'voltage leakage.'

Higher resistance, as mentioned, can be a sign that the chip can carry higher current and handle higher voltages. This is great for being able to push the chip to the max when cooling isn't a factor... BUT, if cooling is a factor (which it is in all cases - except LN2 or Helium...), then you want lower resistance - and LOWER ASIC quality...

However, if you are using sub-ambient cooling, you want higher ASIC quality - so you can push more voltage and current and achieve higher clocks.

November 27, 2013 | 06:10 AM - Posted by Anonymous (not verified)

Sorry, when I said You wanted "LOWER ASIC quality" for air and HIGHER for LN2, I got those backwards, LOL!

Higher ASIC quality = lower resistance, thus lower voltage, less heat, higher clocks on air, but less voltage & current tolerance.

November 27, 2013 | 11:07 AM - Posted by General lee (not verified)

This is further complicated by chip binning. For example, the 7970 has four known bins with different default voltages, based on their ASIC quality number.

up to 2F90 (up to 75% quality) - 1.1750V
up to 34D0 (up to 80% quality) - 1.1125V
up to 3820 (up to 85% quality) - 1.0500V
up to 3A90 (up to 90% quality) - 1.0250V

The pure ASIC quality number is useless if you don't know the bins, at least for non-extreme OC.
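
The bin table above can be written as a simple lookup. The thresholds and voltages are exactly as posted here and should be treated as community-reported figures, not official specifications.

```python
# The commenter's 7970 bin table as a lookup. Community-reported figures,
# not official AMD specifications.

BINS = [            # (ASIC quality upper bound %, stock voltage V)
    (75, 1.1750),
    (80, 1.1125),
    (85, 1.0500),
    (90, 1.0250),
]

def stock_voltage(asic_quality_pct):
    """Return the stock voltage for the first bin the quality falls into."""
    for upper, volts in BINS:
        if asic_quality_pct <= upper:
            return volts
    return BINS[-1][1]   # above 90%: assume the top bin applies

print(stock_voltage(72), stock_voltage(83))   # prints 1.175 1.05
```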

December 1, 2013 | 08:59 AM - Posted by Rick (not verified)

So what are the ASIC numbers?

November 26, 2013 | 05:29 PM - Posted by EndTimeKeeper

Hey Ryan, thanks for all the great info about these cards. If you have the time, you might like to check out two of my videos demonstrating my setup and how my R9 290X (BF4 Limited Edition from Sapphire, the same one you are using in your article) is running. Thanks again for all the hard work and info.

Here are the links to my videos. Sorry for them being kind of rough, I don't make a lot of videos lol.
1. https://www.youtube.com/watch?v=VicPlkKdRGk
2. https://www.youtube.com/watch?v=Ht7cDpu_PwE

November 26, 2013 | 06:56 PM - Posted by snook

strange, we can all be sure NV just randomly grabs GPUs off a shelf/production line and sends them to reviewers ;/

variations are too wide for sure.

November 26, 2013 | 10:42 PM - Posted by Ryan Shrout

I'm sure that NVIDIA does some testing before sending out review samples, but it's all about how they react to that data.  Do they send out 'average' cards, 'above average' or 'f'ing amazing' cards?

November 26, 2013 | 11:29 PM - Posted by Anonymous (not verified)

I look forward to your retail vs press sample AMD/Nvidia shoot out review.

November 27, 2013 | 03:58 AM - Posted by snook

fing amazing. if not they would be extremely foolish for not doing so.

they aren't saints certainly.

December 2, 2013 | 09:18 AM - Posted by Panta

that's why consumers can never trust reviewers' OC results.

November 26, 2013 | 07:38 PM - Posted by Anonymous (not verified)

Since PC Per and other tech sites don't want to be labeled as Nvidia leaning fanboys I'm just gonna call this what it is.

AMD, despite delivering a good product, has apparently pulled some shady shit in order to paint their performance numbers better than what can be expected at retail by delivering cherry-picked GPUs for preview and review.

I think Gordon Mah Ung at Maximum PC brought up a very damning point about all of this.

>>>>>>>>To his knowledge, there are numerous reports about retail cards having variance but NOT A SINGLE REPORT of any variance in any of the handpicked cards shipped for review from AMD.<<<<<<<<

If this turns out to be true, AMD had better own up to this crap, fire the people responsible and be wholeheartedly transparent about expected performance for the average user.

I like AMD, but if this is the way they are going to do business, they can kiss my a..

I know someone at AMD is reading this. Wise up, own up, and be transparent. You have a lot more on the line than you think.

November 27, 2013 | 06:03 AM - Posted by Anonymous (not verified)

I don't believe AMD is doing anything untoward in this case. Many people in the owner's forum have reported that they have much lower throttling than even the review samples.

I just think, if anything, AMD may have been more closely monitoring the first cards produced for production issues, then stopped paying as much attention as production ramped up - something that usually isn't a problem... a card running 5-10C hotter, but still well within tolerances, isn't a problem on other cards...

I think one solution would be to change the TIM they're using and try to weed out the variability in its application. Others have done this with surprisingly good results. A slightly higher quality TIM has allowed some cards to not throttle at all with stock fan settings - but most just gain 50MHz or so on average.

November 26, 2013 | 08:32 PM - Posted by Anonymous (not verified)

Let's not forget he went on a tirade about how Uber mode was invalid because it's not a "default" out-of-the-box setting.

He's probably making out with Tom Petersen in exchange for the Nvidia ads on his site.

Use chapstick don't want your lips dry'n up.

November 26, 2013 | 09:52 PM - Posted by Mandrake

I'm certain Ryan is trembling at the thought of more of your vicious insults! Are you seriously 12 years old or something? That's pretty weak and pathetic.

Nvidia has the right idea here. They advertise a base clock and a typical boost clock that most cards should be able to attain. I have a factory overclocked 780 and without any action on my part the card regularly exceeds the advertised boost speeds by around 100MHz. This means, to me at least, that Nvidia is being reasonably cautious with the boost clocks they choose to advertise. AMD only advertises the maximum clocks that their cards can achieve. They also need to define a base clock so that people can be certain of at least a certain level of performance.

November 26, 2013 | 11:46 PM - Posted by Anonymous (not verified)

As far as reviewers go, I only see Ryan bitching about the base clock clarification.

It's simple to find out. Once the card initiates 3D clocks, that's your base. Duh!!!

What other reviewer is crying like a baby the way Ryan is? "Oh, I don't understand what 'up to 1GHz' means."

At the end of the day it's about what FPS it can push, not whether I can understand the GHz it's operating at.

Other reviewers have pointed out AMD's issues, but it seems Ryan is fully on board with Nvidia's smear campaign:

"Nvidia is misrepresenting the issue. Nvidia is pushing this as a GPU frequency problem where users “expect” certain clocks. What users actually expect is certain levels of performance. Offering a switch that puts the GPU in a mode where it trades performance and temperature is not the problem. The problem is failing to offer that both on both cards. The GPU clock speed oscillates back and forth when the R9 290X is in Quiet mode because that’s what the GPU is designed to do."

There are several other review sites that aren't blowing Nvidia's horn and are professional enough to explain both sides without whining about base clocks.

November 26, 2013 | 10:32 PM - Posted by tbone (not verified)

wake me up when there are 3rd party aftermarket coolers.

maybe you should get another retail card and write another article rehashing this "fan gate" issue every week.

November 26, 2013 | 10:45 PM - Posted by Ryan Shrout

I eagerly await the retail cards as well.  But AMD is in fact selling these cards with reference coolers, so the points are still 100% valid.  

November 26, 2013 | 11:08 PM - Posted by Anonymous (not verified)

Futuremark and Unigine should make a 30-minute anti-boost, anti-PowerTune mode in their benchmarks

November 26, 2013 | 11:13 PM - Posted by hosko

The figure AMD should be advertising is the absolute minimum their card can produce, plus an average figure. Any card a consumer receives should be able to beat the minimum, end of story.

The variance in the performance of their chips isn't up to standard.

For the record, I just purchased an MSI N770 but have purchased AMD in the past. I was building an HTPC, so noise was a major factor considering I have spent a large amount on a surround sound system.

November 27, 2013 | 01:31 AM - Posted by Anonymous (not verified)

ExtremeTech

AMD’s Radeon R9 290 has a problem, but Nvidia’s smear attack is heavy-handed

http://www.extremetech.com/gaming/170542-amds-radeon-r9-290-has-a-proble...

Seems to me Ryan is repeating Nvidia PR word for word.

November 27, 2013 | 02:14 AM - Posted by Simon (not verified)

And I'm left once again scratching my head as to why, with Nvidia having counterpunched and the GTX 780 Ti released, even in aftermarket cooler versions, AMD still isn't allowing 3rd party models with aftermarket coolers to be sold. Which would effectively solve the damn problem.

I mean, why not get it over with ASAP?

I really wonder...

November 27, 2013 | 03:00 AM - Posted by Brad (not verified)

AMD could have largely dodged all the criticism over the high temps, the noise levels and these frequency variations if the AIB partners had had aftermarket cards at or soon after launch.
Instead they released a product that has these variations, is much louder and hotter, and is up to ~10% slower than it could have been.

Tom's Hardware has shown what a decent cooler can do for the 290, and it'll be a similar story with the 290X. While remaining much quieter than the stock cards, the GPU didn't exceed 60C; hence the chip will run at 100% stock frequencies all the time and can be overvolted and overclocked without throttling. They won't be running at "up to 1GHz" anymore, they will just run at 1GHz.

November 27, 2013 | 04:57 AM - Posted by Cynewulf (not verified)

It's all very well showing graphs of clockspeed but without showing the real impact on performance this whole exercise is rather pointless.

Most consumers wouldn't really care (or notice) if their clockspeeds were slightly lower than the maximum advertised. They only care about whether their new card can play the latest games with smooth gameplay.

November 27, 2013 | 05:28 AM - Posted by Mac (not verified)

I agree entirely; the lack of actual performance data here puzzles me.
I guess it's all gone a bit quiet on the review front hardware-wise, so may I suggest reviewers test tri/quadfire on these 290s (I have yet to see any FCAT data on this) instead of reporting every defective card that's sent to them.

November 27, 2013 | 08:24 AM - Posted by Jabbadap (not verified)

I presume you made a mistake on that fan speed graph. It should be RPMs, not MHz, on the y-axis.

Did you check the ASIC quality of those cards (can GPU-Z even read it from Hawaii)?

November 27, 2013 | 08:54 PM - Posted by Death to anti-competitive scum (not verified)

PCper is most assuredly a pro-nvidia shill. He's shown time and time again that he will pick up anything remotely damaging to AMD and run with it.

And for the anonymous comment : "I like AMD, but if this is the way they are going to do business, they can kiss my a.."

If you had such morals, you'd have said that to nvidia years and years ago. Whatever you think AMD has done, multiply that by 100 and that's what nvidia is. A predatory, monopolist anti-competitive a**hole corporation. Whatever technical assets they have, are entirely overshadowed by their unethical business methods. nvidia has been on my sh*tlist for over a decade and that goes double for intel.

Stop supporting these bastard corps that will not compete fairly and ethically. Your money and mindshare is paying for this garbage.

November 27, 2013 | 10:12 PM - Posted by BubadY (not verified)

Yo mates, just stop wasting your nerves with these flames about AMD. You have to admit that the R9 290 is an amazing GPU, speaking in price/performance terms. You cannot say that NVIDIA has graphics cards as cheap as AMD's. But the one reason you are flaming AMD is where you are right: they absolutely failed in designing that card. The actual cooling solution is just crap. No wonder the chip is going down in MHz. My opinion is just to wait for the partner cards and their cooling solutions, and I can promise everyone you won't be disappointed in this card.

greets!

November 28, 2013 | 01:42 AM - Posted by wujj123456

Looks like meaningful discussion is only happening here. So I ported my thread from http://www.pcper.com/news/Graphics-Cards/Controversy-continues-erupt-ove... to here.

=======
Well, I am just wondering why put so much pressure on AMD? Look at Nvidia too, please.

Yes, Nvidia only promises base/boost speed, but not the upper bound. However, remember that when you do your benchmark, you are effectively running at whatever actual speed your card provides. Let's call it the "real" frequency. Your review is based on this "real" frequency. It will stabilize at some point after warming up in a given application. It's the same as AMD's card; the only difference is how they market their speed.

Now if you want to argue that AMD is a bit shady or inconsistent about "real" frequency, take a look at what I have on hand.

Card 0: EVGA GTX780 SC (941/993MHz, 69.4%)
Card 1: EVGA GTX780 non-SC (863/902MHz, 70.5%)
Card 2: EVGA GTX780 non-SC (863/902MHz, 63.7%)
All from Newegg, marked with (base/boost, ASIC quality)

At stock settings, running Heaven benchmark for several minutes, the "real" freqs are stuck at 862/862/1019-1045, with fan speed 51%/60%/63%, and temperature 79/83/84. I picked Heaven because it's easier to run, and I've observed similar results across games.

What do you see?
1) The non-SC versions are effectively running at their base clock while the SC version is running much higher than boost, all at stock settings.
2) All the review sites I saw stated that the GTX780 stays at 80C with some boost while running games. This is definitely not my case. The regular GTX780 has to hit 83/84C to maintain base clock with the stock fan profile.

Now, let me change my perspective. Sure, I don't want SLI to run at different speeds and introduce stuttering and all that other crap. So how much overclock do I need to make them equal? I unlocked fan speed with my own curve, and set thermals to 106%/85C. Also OC'd memory +300MHz. Here is the result:
Card 0: +0MHz, 79% fan, 67C, 1071MHz
Card 1: +79MHz, 97% fan, 78C, 1071MHz
Card 2: +129MHz, 97% fan, 78C, 1071MHz

I need to OC +79MHz on card 1 and +129MHz on card 2 so that all three cards reach the same 1071MHz "real" freq during gaming. That's consistent across all games I've played so far, including Dirt 3, Bioshock Infinite, BF4, FC3, Heaven benchmark, 3DMarks, etc.

Again, you can argue that overclocking tweaks the characteristics a bit, but Nvidia and AMD have different architectures, and running at 1GHz for the R9 may well be a similar operating point to the GTX780 at 1071MHz, on the same process. If not convinced, just compare my stock-setting data with what you guys got during your review.

The variation is definitely there for Nvidia, and it's equally HUGE, if not bigger, especially if you consider that the OC version is just an artifact of binning. Sure, they have different BIOSes, but when overclocked to equal grounds, the difference between cards shows up immediately. Nvidia may well have shipped you a better bin for review, or at least filtered out the worst ones.

If you have a problem with AMD stating max frequency instead of base frequency, which might mislead consumers, fair. However, if your point is inconsistency between retail and review samples, make sure you check Nvidia too. It's no better, from my limited sample. The fact that they state base/boost speeds doesn't make them immune to hand-picking review samples for you guys.

I am a huge Nvidia fan, but at this point, I do think you are biased on this matter, unless you guys do a similar study on Nvidia cards. Similar sample size, warm them up, and check on actual speed during benchmarks.

This is my data and thoughts. Now please show yours. You have much more resource than me after all.

Edit: My cards were all bought in early June, so they must be the old stepping instead of the new one. GPU-z says revision A1.

November 28, 2013 | 04:11 AM - Posted by trey long (not verified)

AMD always has quality issues in their flagships. Crossfire, weird anomalies in retail cards, noise, heat, power draw - all way too much - and when they get caught with their pants down, it takes forever for them to fix it. Pros buy Nvidia 80% of the time because they have to. They really don't have a choice.

November 28, 2013 | 05:22 AM - Posted by Ryan D. (not verified)

True that!

November 29, 2013 | 09:14 AM - Posted by Cynewulf (not verified)

Well thanks for that bout of pointless speculation and hearsay. Well done.

December 5, 2013 | 03:27 PM - Posted by Remon (not verified)

Those are the same issues Nvidia had with past cards.

November 28, 2013 | 01:06 PM - Posted by BaSiLLiSKoS

I have one explanation: Ryan Shrout = Fanboy

It's obvious.

November 28, 2013 | 05:10 PM - Posted by Jim (not verified)

^ Troll, ignore.

November 28, 2013 | 05:11 PM - Posted by Jim (not verified)

^ TROLL, IGNORE

December 1, 2013 | 09:38 AM - Posted by GPU (not verified)

Ryan, if you check out Legit's article, you'll see the problem seems to be caused by the BIOS: when LR flashed the review card's BIOS onto a shop-bought one, they performed identically.

http://www.legitreviews.com/amd-radeon-r9-290x-press-sample-versus-retai...

Any chance of you trying this and updating your article with the results?

December 1, 2013 | 09:37 PM - Posted by Anonymous (not verified)

I'm tired of reading about the fanboys on both sides of this issue in the comments. My god, people, grow up; you act like 10-year-olds. For one, stop posting anonymously and use a name if you wish to bash one side or the other. My guess is you're too chicken shit to do so; please prove me wrong, then. The key is to do your research by looking at many different sites and coming to your own conclusion on what works best for YOU. Screw off, you fanboys on both sides; you're just showing that you're stuck at the age of 10 and cannot grow up and have an adult discussion. Thanks for all the reviews; ignore the fanboy asses and keep up the good work for all of us.
the 6er

December 2, 2013 | 11:45 AM - Posted by Ryan D. (not verified)

Says the anonymous guy.....hypocrite.

December 3, 2013 | 05:03 PM - Posted by Buildzoid (not verified)

What voltages were the cards at? It could be that Sapphire was being generous and gave their card 1.2 V as the stock voltage, which is much too high for 1 GHz on GCN. 1.15 V is enough for my terrible 7970 (1.285 V gives 1190 MHz) to do 1150 MHz, so this card should only need about 1.1 V to hit 1 GHz.

December 3, 2013 | 06:01 PM - Posted by Nightowl224 (not verified)

It's not rocket science to say what's going to fix these cards: it's going to take a series of BIOS updates along with new coolers. I personally saw a lot of issues with the 7000-series cards after I got my 7850s earlier this year.

Case in point: I had issues with my 2GB ASUS Radeon 7850s in various games, including Diablo 3, Mass Effect 3, and Skyrim, until I flashed them one by one to the GOP UEFI BIOS from ASUS, and that fixed a lot of problems, including the driver crashing when alt-tabbing out of games. I'm assuming that ASUS got the BIOS from AMD and then fine-tuned it for their cards.

I don't hate either company, but I have seen products from both companies that were rushed out the door before they were truly ready, all in the name of profit and worshipping the God Of Money, Payola!!!

I have yet to see a review of any of the ASUS R200-series cards anywhere on the internet. Most of the mid-range cards, not just the high-end ones, have dual-fan HSFs on them; it would be nice to see a review of the mid-range ASUS R200-series cards.

December 5, 2013 | 05:45 PM - Posted by MikeB (not verified)

As much as dyed-in-the-wool ATI fans would like to deny the poor QC from ATI add-in-board partners, mixed with half-baked or completely ill-thought-out engineering decisions, in both hardware and software, from ATI itself, I think anyone rational, without skin in the game or a purchase dragging at their mind, has noticed a pattern in things like this.

I myself made the mistake of purchasing 3x 5870s, when they were widely lauded as the best card at the time, as having incredible multi-gpu scaling, quiet performance, and good build quality.

Perhaps this was true of the 4XXX series, but it certainly wasn't true of the 5870 family of cards: constant grey screens at 2560x1600, awful driver problems, and terrible multi-GPU scaling, with, at that point in time, NO OPTION to force AFR behaviors on a per-application basis. What a joke those cards were. Even today they have issues with hardware-accelerated Flash. They performed so poorly I GAVE THEM AWAY.

Since then, I've had 580 sli, and a GTX 780. Each laptop I've purchased has been Nvidia, and any card I've bought as a present will be Nvidia. Fact of the matter is, when we stop hearing stories like this, I may believe ATI has turned around. But it is JUST like them to sweep a problem like this under the rug right after release, and persist in doing so for months, before finally copping to the idea that there is a problem, and slowly working to fix it, before giving up, like they did on the 5870 class of card.

AMD needs to show and prove, not tell and talk, like they are doing now.

December 8, 2013 | 08:46 PM - Posted by snook

Good, we even it out, because after Tom Petersen's condescending, half-truth, regurgitated bile of "we may in the future" in response to Ryan's question of whether G-Sync will be made available cross-platform, I will never buy another Nvidia product.

The answer was and is "no." Had he said, "Nope, let them make their own!" I could have respected that; they made it, after all.

nite nite
