AMD Radeon RX 480 Power Consumption Concerns Fixed with 16.7.1 Driver

Author: Ryan Shrout
Manufacturer: AMD

Rise of the Tomb Raider and The Witcher 3 at 1080p

Power Testing Results - Rise of the Tomb Raider at 1080p

If Metro: Last Light at 4K was our worst-case scenario, I wanted to look at a couple of “normal” cases, the same ones we measured in our initial power story. Starting with Rise of the Tomb Raider at 1080p and the High image quality preset, how do the new 16.7.1 driver changes affect it?

Rise of the Tomb Raider (1080p) power draw, RX 480, 16.6.2 driver

Rise of the Tomb Raider (1080p) power draw, RX 480, 16.7.1 driver

Without enabling the Compatibility mode switch in the driver, power consumption changes look very similar to what we saw in Metro at 4K. The total power consumption is actually a few watts higher with the new driver, hitting 157 watts on average compared to 154 watts with 16.6.2, but the big change is in the source of that power. The PEG slot drops average power draw down to 66-67 watts while the 6-pin picks up the slack and jumps to nearly 90 watts. That new separation between the white and blue lines of power draw is what you should be paying attention to – that demonstrates the new weighting of power phases to the GPU.

Rise of the Tomb Raider (1080p) current draw, RX 480, 16.7.1 driver

Looking at the current ratings for each power source with the new 16.7.1 driver we clearly see that the PEG slot drops, pulling the same 5.75A in Rise of the Tomb Raider as we saw in Metro: Last Light at 4K. This is significantly closer to the rated 5.5A mark.
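
As a sanity check on these figures, power and current on the +12V rail are related by P = V × I. A minimal sketch of that arithmetic, assuming a nominal 12 V rail and using illustrative wattages read off the charts above (not exact measured values), might look like this:

```python
# Relate +12V power draw to current and compare against the 5.5 A
# per-slot PCI Express guideline. Wattages are illustrative figures
# read off the charts above; a nominal 12 V rail is assumed.

PEG_SLOT_LIMIT_A = 5.5  # PCI Express +12V per-slot current rating

def implied_current(watts, volts=12.0):
    """Current implied by a power figure on a nominal 12 V rail."""
    return watts / volts

for driver, slot_watts in [("16.6.2", 72), ("16.7.1", 66)]:
    amps = implied_current(slot_watts)
    verdict = "within" if amps <= PEG_SLOT_LIMIT_A else "over"
    print(f"{driver}: {slot_watts} W -> {amps:.2f} A ({verdict} the 5.5 A rating)")
```

The measured currents in the charts run slightly higher than this estimate because the +12V rail sags a little below 12 V under load, which pushes current up for the same power.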

What happens if we enable that Compatibility Mode switch?

Rise of the Tomb Raider (1080p) power draw, RX 480, 16.7.1 driver (Compatibility ON)

Flipping that switch for Compatibility Mode to ON, we see an overall drop in power consumption; we are now under 150 watts on average! This also marks the first time we see the power draw from the PCI Express slot under 65 watts. 6-pin power consumption is still higher than 75 watts (hitting 84-85 watts), but that is a much more reasonable level considering the build of the hardware.

Rise of the Tomb Raider (1080p) current draw, RX 480, 16.7.1 driver (Compatibility ON)

Current draw with this mode enabled on the PEG slot +12V line is now under 5.5A (!!) for the first time. The 6-pin connection is pulling over 7A, though with each of the two +12V connections rated at 8A (physical connection rating, not PCI Express rating), I feel more than comfortable with that swap.

Rise of the Tomb Raider (1080p)    16.6.2    16.7.1    16.7.1 (Compat ON)
MB Slot (PEG) Power                72 W      66 W      63 W
MB Slot (PEG) Current              6.2 A     5.7 A     5.4 A
6-pin Power                        76 W      89 W      84 W
6-pin Current                      6.4 A     7.6 A     7.0 A
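
The table above can also be checked programmatically. A small sketch (values transcribed from the table; limits of 5.5 A on the slot's +12V pins and the 75 W nominal 6-pin rating) shows that only Compatibility Mode brings the slot under spec:

```python
# Flag which driver configurations keep each power source within its
# rating, using the Rise of the Tomb Raider numbers from the table above.

SLOT_LIMIT_A = 5.5      # PCI Express +12V per-slot current rating
SIX_PIN_NOMINAL_W = 75  # official 6-pin rating (the connector tolerates more)

results = {
    "16.6.2":             {"slot_a": 6.2, "six_pin_w": 76},
    "16.7.1":             {"slot_a": 5.7, "six_pin_w": 89},
    "16.7.1 (Compat ON)": {"slot_a": 5.4, "six_pin_w": 84},
}

for driver, r in results.items():
    slot_ok = r["slot_a"] <= SLOT_LIMIT_A
    over_w = r["six_pin_w"] - SIX_PIN_NOMINAL_W
    print(f"{driver}: slot {r['slot_a']} A "
          f"({'OK' if slot_ok else 'over spec'}), "
          f"6-pin {r['six_pin_w']} W ({over_w:+d} W vs nominal)")
```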

Power Testing Results - The Witcher 3 at 1080p

My final data set for power consumption today comes from The Witcher 3, again at the more standard resolution of 1080p. Image quality settings are set to Ultra (with HairWorks disabled throughout).

The Witcher 3 (1080p) power draw, RX 480, 16.6.2 driver

The Witcher 3 (1080p) power draw, RX 480, 16.7.1 driver

As with the Rise of the Tomb Raider results above, total power draw is a few watts higher with the new driver, up to 160 watts from 155 watts. But the split of power draw between the two sources continues to improve, with the PEG slot pulling less than 70 watts on the +12V line while the 6-pin connection increases its draw to 90+ watts.

The Witcher 3 (1080p) current draw, RX 480, 16.7.1 driver

Current draw in The Witcher 3 looks very similar to Rise of the Tomb Raider as well: the PCI Express slot is now drawing just under 6A.

What happens if we enable that Compatibility Mode switch?

The Witcher 3 (1080p) power draw, RX 480, 16.7.1 driver (Compatibility ON)

With Compatibility Mode enabled, total power draw in The Witcher 3 falls to 152 watts, with 64 watts from the PCI Express slot itself. This marks another case where the power draw is now under the recommended limit from the PCI-SIG.

The Witcher 3 (1080p) current draw, RX 480, 16.7.1 driver (Compatibility ON)

Finally, looking at the added benefit of the compatibility mode on current draw, the PCI Express slot is pulling right at 5.5A, with the 6-pin connection hitting 7.2A or so.

The Witcher 3 (1080p)              16.6.2    16.7.1    16.7.1 (Compat ON)
MB Slot (PEG) Power                74 W      68 W      64 W
MB Slot (PEG) Current              6.5 A     5.9 A     5.6 A
6-pin Power                        76 W      90 W      85 W
6-pin Current                      6.4 A     7.7 A     7.2 A


July 7, 2016 | 03:01 PM - Posted by mrgreen (not verified)

I've ordered a 1080.

July 7, 2016 | 03:17 PM - Posted by looniam (not verified)

If your budget can accommodate a 1080, it's pointless to consider a card at ~25%-30% of the price anyhow.

July 7, 2016 | 05:50 PM - Posted by Anonymous (not verified)

Are we sure this is a driver review or is it republicans foaming at Hillary's emails?

July 7, 2016 | 03:20 PM - Posted by Anonymous (not verified)

Well, Good for you! Now go away.

July 7, 2016 | 03:27 PM - Posted by Anonymous (not verified)

I just had a pizza, it was delicious.
It wasn't made by nVidia, so that must piss you off.

July 7, 2016 | 03:51 PM - Posted by Behrouz (not verified)

Then you're ignorant to come here!
This article is about a $200 card, but you bought a card priced above $600, yet you came here and said: thanks, I bought it?

July 7, 2016 | 08:36 PM - Posted by Odin (not verified)

Wow $600 for 1080, we get to pay $1100-1300 in Australia

July 8, 2016 | 01:38 PM - Posted by Anonymous (not verified)

700 US dollars equals 925.83 Australian dollars; the rest is VAT, I suppose, so the cost isn't really surprising, right?

July 7, 2016 | 10:21 PM - Posted by MrMe (not verified)

You are ignorant to claim that a RX480 8GB can be found anywhere for $200.

July 7, 2016 | 10:54 PM - Posted by Robojeeves

Although you are technically correct, many people have already gotten 480 8GB models for $200. First, Best Buy sold them at the wrong price by mistake; then it was discovered that all of the launch 4GB cards were 8GB cards with a 4GB sticker slapped on. The rest of the memory can be unlocked by flashing the vBIOS.

July 9, 2016 | 02:05 AM - Posted by Anonymous (not verified)

You are also foolish to think that any GTX 1080 is going to sell for $699 or even $799; almost every GTX 1080 I've seen selling on eBay goes for $899. It goes both ways, son.

July 7, 2016 | 04:28 PM - Posted by Anonymous (not verified)

Damn dude, why so many?

July 7, 2016 | 04:29 PM - Posted by Anonymous (not verified)

So you live in a rusting double-wide and you have forced your wife to work an extra shift for two weeks at the sugar shack to pay for Nvidia's overpriced 1080 FE gimped kit! Your teeth are even green to match Nvidia's colors; have you ever used toothpaste?

July 7, 2016 | 05:10 PM - Posted by Anonymous (not verified)

Could be anedia troll. I do not believe what an anedia fan writes.

July 8, 2016 | 11:30 PM - Posted by Anonymous (not verified)

I ordered 2x RX 480s on release day and was not bothered by a bit of extra power, but am happy the power is even lower now.

July 9, 2016 | 04:09 PM - Posted by RSavage (not verified)

There are at least two power-efficiency features they haven't implemented yet, "boot time power supply calibration" and "anti-aging compensation", so power efficiency should improve even more by then.

July 9, 2016 | 02:45 AM - Posted by aparsh335i (not verified)

I also ordered a 1080 after the fiasco. Read a lot of posts from real people who actually fried their motherboards. I was 100% ready to support AMD with 2x RX 480s, but it's not worth messing with an unfinished product.

July 9, 2016 | 02:41 PM - Posted by Anonymous (not verified)

Are you stupid? How were you considering the RX 480 and then suddenly the GTX 1080? Both cards aren't even in the same price/performance bracket. Plus, why bash AMD with the "unfinished product" line when the GTX 1080 is a paper launch that lacks async support, does not have preemptive pixel enabled, lacks mixed-resolution support, and has no availability?

July 9, 2016 | 04:11 PM - Posted by RSavage (not verified)

Yeah, but it's normal for Nvidia to present features they implement months later, yet for AMD it's a reason not to buy their product. "Boot time power supply calibration" and "anti-aging compensation" should be implemented in the next WHQL, though.

July 9, 2016 | 05:31 PM - Posted by jabbadap (not verified)

S/he said 2x RX 480. When CrossFire scales, the performance is quite near a GTX 1080 (of course, it doesn't scale that often).

July 7, 2016 | 03:18 PM - Posted by Pholostan

So is this storm in the teacup finally over?

I guess this is what happens if you very much want to have only one 6-pin to get into all those OEM contracts. Seems like it has worked; Dell sells the 480. Add to this that you want to release early on a new process that GlobalFoundries is supposed to deliver on. I think there are notable differences power-wise between the early batch of cards and the batches that arrived just weeks later. But confirming that would mean testing many cards from the same batch and comparing them, which isn't really practical.

July 7, 2016 | 03:19 PM - Posted by remc86007

I think the most noteworthy thing is that under compatibility mode the card gets the same performance on the new driver as on the old one while using less power. I love how AMD responded to this. Imagine if they had an R&D and driver development budget similar to Nvidia's...

July 7, 2016 | 03:49 PM - Posted by Anonymous (not verified)

They had to bump the performance by 3% to offset the drop due to the power change, performance that would have gone to the card anyway. So really, it is worse.

If they had the same budget as Nvidia, they wouldn't have fudged that part to start with, and it might have 15% better performance (GTX 1060 rumor) so that it could actually compete.

Worse is subjective depending on how you look at it.

July 7, 2016 | 05:05 PM - Posted by Anonymous (not verified)

Your glass looks half empty; can I offer you a refill?

July 7, 2016 | 05:35 PM - Posted by Anonymous (not verified)

Are you a complete numbskull, or do you just have poor reading skills? The card does not lose performance with the fix. The power is rebalanced so that more power is drawn from the 6-pin than from the PCIe slot. The card still uses the same power as before when compatibility mode is OFF, and performance is also slightly increased.

July 7, 2016 | 10:24 PM - Posted by MrMe (not verified)

He was talking about the dynamic OC potential (boost), which this card just sacrificed, since it needs to always be "boosted" to meet minimum advertised specs.

July 7, 2016 | 03:45 PM - Posted by '47 Knucklehead (not verified)

Good to know. It still exceeds specs unless you turn on the compatibility mode (and even then, still sometimes), but it damn sure is a lot better than before.

This was just an example of shoddy quality control/testing and never should have been allowed to ship. This, coupled with the "some have 8GB with 4GB of it disabled" issue, just shows you that AMD rushed to get product out the door and sacrificed quality control/testing in an attempt to scoop Nvidia.

No wonder AMD is $2.2 billion in debt and hasn't shown a profit in the past 12 straight quarters. They have serious management issues, and it shows.

July 7, 2016 | 03:55 PM - Posted by Anonymous (not verified)

Not really bad management; Nvidia released early, so AMD had to as well in order to pull in any revenue that would matter. 1060s have been on the way; they're just stockpiling so that they will have a notable quantity when release day hits.

You would think the engineers would have taken the "cut down" approach that Nvidia uses a bit more seriously (one board, lots of cut-downs) to mitigate costs. Doesn't seem like something they're interested in; however, they sure do know how to break QC and pass the "savings" on to the customer, lol.

July 7, 2016 | 04:12 PM - Posted by Anonymous (not verified)

Could you be any more of a paid Nvidia FUDster! The 4/8 was for the reviewers, but some folks got a little Easter egg memory gift out of that 4/8, and at least it's not the 3.5 = 4 Nvidia nonsense.

Let's thank AMD for designing their GPUs for more than the gaming GITs in mind, so AMD's GPUs will perform better for computational uses and other graphics uses. Nvidia's problems will become more apparent as the entire gaming industry switches over to using DX12 and Vulkan and especially for the VR gaming, where AMD's Asynchronous Compute fully in the GPU's hardware will result in much lower VR gaming related latency. Nvidia will not be able to code their way out of having a lack of fully in the GPU's hardware Asynchronous compute and processor thread scheduling and dispatch like AMD has in their GCN and GCN/Polaris GPU micro-architecture.

July 8, 2016 | 12:39 AM - Posted by Mingyao Liu (not verified)

AMD has been doing that for a long time; for example, selling 290s that could be unlocked to 290Xs at launch to capture more market share. It's nothing unusual...
I really think software will win Nvidia the graphics card market... their new tech is very interesting...

July 8, 2016 | 03:39 AM - Posted by Anonymous (not verified)

"AMD's Asynchronous Compute fully in the GPU's hardware will result in much lower VR gaming related latency."

Async compute has absolutely nothing to do with latency for VR. The key feature for VR is pre-emption for injecting Timewarp just before buffer readout, and this is ALREADY achieved on older cards from both vendors.

This is obvious if you actually know what Async Compute is: the ability to run compute operations in parallel with graphical operations. For VR, you are performing two graphical operations in series: rendering an image, then warping an image. You cannot warp the image before you've rendered it, so no parallelism gains there. You cannot 'pre-warp' a preceding frame to be ready 'just in case', because the whole POINT of timewarp is to sample the IMU IMMEDIATELY before warp for minimum possible latency.

July 11, 2016 | 02:19 PM - Posted by Felipe (not verified)

Bro, I live in Brazil, and the 1070 is 2200 BRL. At the same time, AMD prices are even more "taxed" because the people who "import" AMD want even more profit from customers... like the RX 480 at 1450 BRL...

Which card do you think is best for gaming at 1080p? (mostly MMOs)

My spec: i5 4690K, GA-H97M-D3H, 1TB WD HDD

And why should I pick an RX 480 if I just want to play mostly DX11 games, since most MMOs will not support DX12 for a long time...

Thank you, bro

July 7, 2016 | 04:20 PM - Posted by Anonymous (not verified)

Exceeding the 6pin power spec is much safer than exceeding the PCIe power spec. Several orders of magnitude safer.

July 8, 2016 | 05:56 AM - Posted by Anonymous (not verified)

Finally someone recognizes that the 6-pin power connector is the way to go if you need to draw more power than the specification allows. Hell, I've seen so many rigs with 6-pin-to-8-pin adapters, and they run perfectly fine. And that is up to double the spec of the 6-pin connector!

July 8, 2016 | 08:17 PM - Posted by Anonymous (not verified)

6-pin PCI-E power connectors are ridiculously under-spec'd. 75W is a crazy-conservative spec for that connector. Look at it this way:

the 6-pin connector has three yellow wires and three black wires. The center yellow wire is not powered (or is not supposed to be). The center black wire is not a ground, it is a "sense" - it IS a ground wire on the PSU side of the connector, but it doesn't carry any operating current, it's only there so the card can confirm that it has a 6-pin power connector plugged in. This leaves two yellow "+12V" wires and two black "Ground" wires.

The 8-pin connector has the same six wires with the same arrangement. It adds two black wires on the PSU side, and they're both ground wires. On the GPU connector, one pin is a second "sense" pin - when that pin is grounded, the card knows that an 8-pin power connector is plugged in - which allows the card to draw power from the middle yellow "+12V" wire (the one that is not powered in a 6-pin connector). The other additional pin is a third "Ground" connection to accommodate the extra current provided by the third "+12V" wire.

So the 8-pin connector is rated at 150W, using three +12V wires and three Ground wires. The 6-pin connector has two +12V wires and two Ground wires. If each wire is capable of at least 50W, shouldn't the 6-pin be capable of 100W at least over its two wires?
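
The arithmetic behind that question is easy to check. A quick sketch, treating the 8-pin's 150 W rating as spread evenly across its three powered +12V wires (the commenter's assumption, not an official per-wire figure):

```python
# Per-wire arithmetic from the comment above: the 8-pin's 150 W rating
# over three powered +12V wires implies 50 W (~4.2 A) per wire; applying
# that same per-wire figure to the 6-pin's two powered +12V wires
# suggests ~100 W of capability versus its official 75 W rating.

EIGHT_PIN_RATING_W = 150
EIGHT_PIN_12V_WIRES = 3
SIX_PIN_12V_WIRES = 2

per_wire_w = EIGHT_PIN_RATING_W / EIGHT_PIN_12V_WIRES  # 50 W per wire
per_wire_a = per_wire_w / 12.0                         # ~4.17 A per wire
six_pin_implied_w = per_wire_w * SIX_PIN_12V_WIRES     # 100 W

print(f"per wire: {per_wire_w:.0f} W ({per_wire_a:.2f} A)")
print(f"6-pin implied: {six_pin_implied_w:.0f} W vs the official 75 W rating")
```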

July 8, 2016 | 11:28 AM - Posted by Anonymous (not verified)

So, you wanted to see the 3.5GB + .5GB under the sticker. Hmmm.. True Diehard NV Idiot, I see.

July 7, 2016 | 03:46 PM - Posted by NOT A ROBOT (not verified)

I think it is misleading for AMD to put the PCIe power fix and the game/Polaris driver improvements in the same driver update. I would like to see performance results that aren't affected by the "optimizations designed to improve the performance". Take a workload that isn't affected by the Polaris "improvements" and see how the PCIe "fix" really affects performance.

July 7, 2016 | 04:58 PM - Posted by Anonymous (not verified)

Really? It's misleading to include performance tweaks in a graphics driver update, just like nearly every single driver update ever in the history of graphics drivers?

July 7, 2016 | 05:06 PM - Posted by Anonymous (not verified)

What? You can get that... just don't use compatibility mode and you've got the 3% performance increase (plus power draw on the PCIe slot reduced significantly). Want the lowest power draw? Keep the same performance as at launch and flip the switch.

July 8, 2016 | 08:26 PM - Posted by Anonymous (not verified)

Why do I get the feeling that if Nvidia were to implement a hardware fix in a driver update (say, for example, Maxwell's tendency to idle at a really high GPU frequency when powering a 144Hz monitor, driving up idle power consumption by 70W) and they included other "improvements" and "optimizations designed to improve the performance" in that driver, you would not only be completely fine with it, but you would in fact argue the point with someone else who made the exact same comment you just made?

July 7, 2016 | 03:54 PM - Posted by Common-Sense (not verified)

Why not get a FLIR camera and check temps at the PCIe connector instead of going crazy about how the card is still too high on current draw? OMG!! It's still higher than 5.5 amps!! (sarcasm) If the connector fingers stay within temp so as not to melt the plastic housing, then I see no cause for concern...

Also, all the power circuitry for the PCIe connectors is connected in PARALLEL, so if more than 70 watts was going to destroy the motherboard, then you could never have more than one video card on the PCIe bus...

July 7, 2016 | 05:02 PM - Posted by Ryan Shrout


You aren't worried about melting plastic. It's about burning up pins.

VOLTAGE goes through connectors in parallel, not current/power. Only the underlying copper plane deals with the combined power/current.

July 7, 2016 | 06:26 PM - Posted by Anonymous (not verified)

Actually, Ryan, the melting of PVC can have severe long-term health issues, including but not limited to hormonal disorders.

PVC will start emitting fumes above 70°C.

That being said, I will provide you with a thermal image of an R9 295X2 with a 7% overclock on its core, running Unigine Valley.

The connectors are fine even at 490W power draw on 2x 8-pins, which is about 160W out of spec (the R9 295X2 only draws about 30W or so from the PEG slot, according to reviews).

July 7, 2016 | 09:41 PM - Posted by Common-Sense (not verified)

This is exactly what Ryan should have tested, since he made such a big deal about exceeding the 5.5 amps. It's the heat buildup you have to worry about wherever a connector is used, because connectors are the weak points: higher resistance at those points means higher I^2R losses as current draw goes up. If there is no major heat buildup that can damage the plastic, then there is no issue with pulling more current than the spec allows.
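
The I^2R point can be illustrated in a couple of lines; note the 10 mΩ contact resistance below is an assumed, illustrative figure (real pin contacts vary), not a measurement from the article:

```python
# Heat dissipated in a connector contact grows with the SQUARE of the
# current: P = I^2 * R. The contact resistance here is an assumed,
# illustrative value, not a measured one.

CONTACT_RESISTANCE_OHM = 0.010  # assumed 10 mOhm per contact

def contact_heat_w(current_a, resistance=CONTACT_RESISTANCE_OHM):
    """Power turned into heat at one contact, P = I^2 * R."""
    return current_a ** 2 * resistance

for amps in (5.5, 6.2, 7.0):
    print(f"{amps:.1f} A -> {contact_heat_w(amps) * 1000:.0f} mW per contact")
```

Going from 5.5 A to 6.2 A is only about a 13% current increase but roughly a 27% increase in contact heating, which is why modest over-spec current matters less than it sounds at first, yet compounds quickly at higher draws.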

July 7, 2016 | 11:16 PM - Posted by Anonymous (not verified)

There's PVC in computer hardware components? O.O

July 8, 2016 | 08:33 PM - Posted by Anonymous (not verified)

Sure, why not? It's cheap, easy to work with, does a very good job as an insulator at operating temperatures below 70°C, and in this particular use case (computer hardware components), they're (supposed to be) being applied in an environment that will pretty much never come anywhere close to temperatures at which PVC may start to break down. (Except in circumstances catastrophic enough that whether or not PVC is involved is probably the least of your worries.) XD

That being said, I wouldn't doubt it for a second if standard Molex connectors were made of PVC or based on it. But I doubt PCI-E slots would be. They're probably closer to a high-temp thermoplastic like Rynite or something.

July 8, 2016 | 01:07 AM - Posted by NamelessTed

The 6-pin connector was never the issue. Ryan has said multiple times that it's less of an issue if the 6-pin is overdrawing, as the connector can more than handle it, and the power is coming from the PSU. The important part is the amperage coming over the pins in the PCIe connector on the motherboard.

July 7, 2016 | 08:59 PM - Posted by Sid Vicious (not verified)

What pins are you talking about? Pins that run through the connector into the motherboard? Those touch plastic... which would melt first.

Watching all of this, I have to wonder: where is the control PCBA? Have you measured a video capture card? An Nvidia GTX 970/980? An AMD Fury? Why are we not comparing these measurements to different cards? It really does seem blown out of proportion to me, but that's just me.

July 7, 2016 | 09:54 PM - Posted by Common-Sense (not verified)

Burning pins indicate higher I^2R losses, heat that comes from increased current draw, which means it WILL melt plastic. However, a one- or two-amp increase is not going to generate enough heat to do that.

You need way more.

What? So you are trying to say that if I parallel a few devices onto one wire branch, the current will not flow through this branch? HUH?? The + source path ties back together on the motherboard and the ground paths all tie to the ground plane. It branches out to the individual PCIe slots, but the total current still goes through the main source path. So, like I said, if damage was going to occur to the motherboard, then putting multiple cards into the slots would have done it already.

The current limit of 5.5 amps is on a PER SLOT basis. It's the slot pins that are the weak point...

July 9, 2016 | 10:17 PM - Posted by Anonymous (not verified)

Don't worry about the copper burning. The plastic will melt first.

July 7, 2016 | 04:14 PM - Posted by wcg

Let's give AMD credit for sorting this out quickly. GG AMD.

July 7, 2016 | 05:30 PM - Posted by Anonymous (not verified)

I'm sure Ryan, Nvidia shill, will find something else to cry about.

July 8, 2016 | 01:09 AM - Posted by NamelessTed

Did you not even read the article?

In Ryan's conclusion:
"But I do believe that AMD has done it’s best to address the power consumption concerns without a hit to performance, getting the RX 480 to a much more reasonable power situation. I no longer believe that consumers should be worried about the stability of their PCs running the RX 480 with the 16.7.1 driver installed."

July 8, 2016 | 03:08 AM - Posted by JohnGR

AMD made a mistake; Ryan was happy to make a full analysis. Now it is time to cover up. No reason to burn all the bridges to AMD.

But you are not going to see this site making a full analysis of Nvidia's problems. Only linking to other sites in case the problem is already known.

July 8, 2016 | 12:10 PM - Posted by Allyn Malventano

Seriously? That's what you come in here with?

July 8, 2016 | 12:53 PM - Posted by JohnGR

You were really happy in the previous article when I was saying that AMD messed up. Now you have a problem?

So tell me: when was the last time you did a multi-page analysis of problems on an Nvidia card? And have you updated that analysis?

No analysis for the fan problems on 1080 cards.

No analysis for the DVI problem on Pascal.

No analysis for the Vive problems on Pascal.

No recent analysis for the high idle power problems. You never continued that old article, because Nvidia never fixed it and the problem also appears on Pascal cards.

No analysis for the DPC latency and stuttering problems on Pascal.

There were also people talking about throttling problems on Pascal. No analysis here either.

So, where are your analyses of the above? You only barely mentioned the fan issue and the Vive problem. You never really spent time on them. You avoid it. I checked ALL the articles in the last pages of the Graphics Card section. Nothing. Nothing negative about Nvidia. NOTHING.

I really think Raja made a mistake giving you an exclusive for the RX 480. You will never be objective. He should stop trying.

July 9, 2016 | 03:13 AM - Posted by aparsh335i (not verified)

None of the things you mentioned above are important. The RX480 problem was important. The GTX 970 3.5+.5gb thing was important, and PCPER covered it. Get over yourself, these guys at pcper are enthusiasts, not fanboys.

July 9, 2016 | 09:10 AM - Posted by Allyn Malventano

I agreed with you in the previous article because you were able to make a good comment that contributed to the discussion. If we looked at every little nit-picky thing that came up with every card, there would be a crapload of posts about AMD as well, which we can't do because of folks like you, so we stick to the big issues.

July 9, 2016 | 12:33 PM - Posted by JohnGR

Good comments are not just the comments you agree with. You are missing the point of the comments section and the whole reason for the forums' existence. And you never miss little nit-picky things about AMD cards. There is always an analysis of those.

July 8, 2016 | 01:53 PM - Posted by JohnGR

PS: The excitement on your face in a previous webcast, when Ryan was saying that the GTX 1060 is coming and doesn't leave much time for the RX 480 to play alone in the market, was priceless. :p

July 9, 2016 | 09:04 AM - Posted by Allyn Malventano

Yes, we are all so shocked that enthusiasts are excited about new hardware. How about you look less than a minute past that point, where I was assuming the 1060 would come out at higher cost per performance than the 480.

July 9, 2016 | 11:51 AM - Posted by JohnGR

It's Nvidia. It's like predicting that the sun will rise from the east.

Anyway, the whole mess with the RX 480 was a good opportunity for sites to show people some examples of cards that will pull much more power from the PCIe bus and/or the PCIe power connector under overclocking. That opportunity has come and gone. People will keep overclocking their cards thinking that the only thing they should care and worry about is temps.

July 9, 2016 | 03:51 PM - Posted by jabbadap (not verified)

Well, yes, it would be interesting to see, I agree. Does pcper have a GTX 750 Ti or GTX 950 without a 6-pin PCIe connector laying around (or a Radeon R7 250)? Check the power with an OC and the TDP slider at max. If I remember correctly, at least with the GTX 750 Ti the BIOS has it restricted to 75W. Don't know about the GTX 950; they are not designed by Nvidia itself, but they use the same BIOSes and the same kinds of restrictions are there (hint: you need to hard-mod your card or edit your BIOS to bypass BIOS power restrictions).

I would not really be that concerned about PCIe power connectors, though. Although the 6-pin connector is rated to 75W, it can quite safely pull more than double that. So if you have to go out of spec, it is best to do it through those rather than the PCIe slot.

July 9, 2016 | 05:51 PM - Posted by JohnGR

The 750 Ti is a 60W TDP card. It shouldn't have problems, even after overclocking. The GTX 950, on the other hand, is probably at its limits at defaults. Overclock it and you've got an RX 480 in your slot.

I wouldn't be using the GTX 950 as an example if W1z at TechPowerUp wasn't getting 20% extra performance after overclocking it. That 20% extra performance, for me, is an indication that the card is NOT power limited. Until now, I've gotten plenty of insults, but no one has said that I am wrong in that assumption.

The 6-pin is considered safer, but what happens when you use a 600W PSU that costs $25 and looks like it uses a design targeting Athlon XP systems?

July 9, 2016 | 06:53 PM - Posted by jabbadap (not verified)

Well, you can check the BIOS limits yourself (I don't have a Windows machine near me right now, and the BIOS tweaker did not work with Wine):
Asus GTX950 no 6-pin bios
Maxwell II BIOS Tweaker

July 12, 2016 | 04:18 PM - Posted by yipeekaiyay (not verified)

For somebody who thinks Ryan Shrout is an Nvidia shill, you tend to visit his site and YouTube channel a whole lot.
I'm sure he appreciates all the hits and views he has been getting from you... keep on keepin' on.

July 8, 2016 | 03:22 AM - Posted by donut (not verified)

Come on, man. Pcper, Tom's Hardware and others pointed out the problem, and AMD fixed it in a short amount of time.

I think all parties deserve credit here.
It shouldn't have happened in the first place but shit happens.

July 8, 2016 | 10:52 AM - Posted by Batismul (not verified)

Yes, definitely, GG AMD. Gonna wait on a few full reviews of the GTX 1060 before a few of my friends purchase.

July 7, 2016 | 04:29 PM - Posted by DaveSimonH

At least it was addressable via a driver update. Would have been a massive, expensive blunder if it had required hardware revision and recalls.

July 7, 2016 | 04:38 PM - Posted by Anonymous (not verified)

Addressable it was, not like that 3.5 and 0.5 memory fiasco that Nvidia users got shafted with. I wonder how that GTX 970 class-action lawsuit is going; talk about the Green Gimping on that one!

July 7, 2016 | 06:51 PM - Posted by Anonymous Nvidia User (not verified)

Again with the 3.5-gig BS. It has a full 4 gigs; in fact, a Guru3D benchmark shows a 970 using slightly more than 4 gigs in Hitman DX12. Impossible, I know.

The 970 is such a gimped card that a new RX 480 with 2x-2.3x the RAM barely beats a stock one. Except it doesn't beat a heavily overclocked one and isn't as power efficient. Yes, but look at the DirectX 12 performance, in fewer than 12 games total. DirectX 11 has how many more, LOL.

The RX 480 will be pushing up electronic daisies before DX12 is relevant.

July 7, 2016 | 07:41 PM - Posted by Anonymous (not verified)

Nvidia's cards do not improve much over time, and Nvidia even does things to keep the older hardware gimped to keep its customers on that upgrade treadmill. Even some of AMD's older SKUs will benefit from DX12 and VR/Async-compute. Just look at the raw compute stripped out of Nvidia's SKUs to see where Nvidia is getting it's power savings from, and it has nothing to do with Nvidia's consumer GPU micro-architecture, as the Nvidia hardware stripping for power metrics continues unabated. AMD's GPU cores have more hardware resources so the power usage will be higher, but wait until the optimized DX12/Vulkan Games/VR Games make good use of AMD's GPU async-compute, and better computatainal compute for Gaming and other graphics, and non graphics uses.

Those CPU like AMD GCN ACE units will be good for a lot of different GPU acceleration tasks like gaming physics and Ray Tracing acceleration done on AMD's GPUs. Nvidia strips out it's compute and forces its users into its very costly Pro GPU solutions while AMD has more compute remaining in its consumer SKUs for users to take advantage of. Nvidia is all about the milking and its customers are the cows, and Nvidia loves its cash cows cooked up real well. The Green Gimping continues as usual, and the fools rush in with their payday loans in hand to be overcharged to the max.

Nvidia says this about dealing with its Cows: Don't try to understand them just fleece them up and scam them...

July 7, 2016 | 07:45 PM - Posted by Anonymous (not verified)

edit: computatainal

to: computational

WTF LibreOffice that dictionary needs some work!

July 8, 2016 | 03:09 PM - Posted by Jdwii (not verified)

This is slightly true, but as always you get a fanboy going a bit too far. Get over it: an RX 480 performs like a 970 despite being on 14nm instead of 28nm. The RX 480 is an amazing card for the money, BUT as I expected a while ago, Polaris does not come close to Nvidia in terms of having the most efficient architecture. I however will recommend the RX 480 for quite a few builds.

July 8, 2016 | 04:54 PM - Posted by Anonymous Nvidia User (not verified)

I guess you don't remember the HD 6000 series of cards that weren't supported well by AMD once the node changed to 28nm. Those cards were obsolete one year after being made.

Once cards moved to the GCN architecture, any driver update raises performance on all older GCN cards as well. Don't think that AMD is doing this for your benefit. It's always cheaper to build on the same architecture than come out with something new. They only did this because of debt and a lower R&D budget. You're deluding yourself if you think otherwise. AMD hurts its bottom line, because if you have an older card that performs well there is less of a need to upgrade. But it is cheaper to design and manufacture, so even if they sell less they can still make money.

Polaris is a decent card, but it is still GCN, so it's nothing new really aside from some VR support and primitive rasterization. Can't really massively improve efficiency when it is still GCN. It's sad that Polaris is 14nm and Pascal 16nm, but Pascal is way more energy efficient. It's good efficiency for an AMD card, but it only comes from the node shrink from 28nm and maybe some slight tweaks of the architecture.

When you don't get anything new (innovation) with AMD, you're actually the ones being fleeced by paying your hard earned cash for the same old same old.

AMD is purposely stagnating the graphics market by utilizing their console dominance. The console core is based on hd 7850 or 7870. Very old indeed. Directx was basically tailored to their cards because of their relationship with Microsoft and their Xbone system.

Wait until some of the other features of DX12 that Nvidia has in its architecture get supported. GCN will feel its age then.

You milk a cow and fleece sheep. LOL

July 7, 2016 | 04:41 PM - Posted by StephanS

I have an RX 480 4GB coming...
AMD's response time and solution satisfied my initial anger.

So now, just crossing fingers that my card can undervolt decently.

Now, the GTX 1060 is still interesting at $250, if a blower version is available. ($299 for the reference is a bit much)

The 1060 should be able to overclock pretty high in its 150w limit.
possibly enough to be 20% faster.

That is to say, I would have ordered a GTX 1060 if the "Founders Edition" was $250 and not $300.

BTW, PCPer.. how low of a voltage were you able to run your RX 480 at, at stock clocks?

July 7, 2016 | 05:07 PM - Posted by Anonymous (not verified)

People on reddit are saying 1.3 GHz at 1.075 V on the luckiest silicon lottery; the average undervolt is 1.1 V at 1.3 GHz.

Also, people (BIOS modders) are saying that the memory straps are maxed at 2000 MHz, so there's no point in overclocking the memory :(

July 7, 2016 | 05:54 PM - Posted by StephanS

Thanks for the data points.

LegitReviews made it seem like 1.05 V was 'easy'.

They reported only about a ~15 W saving going from 1.15 V to 1.05 V. So a 10% lower voltage resulted in about 10% lower power usage.

I wonder if someone has a breakdown of power usage. Some say the fan can use significant power at high speed...

Another data point: the silicon doesn't seem to be heat sensitive.
It doesn't work any better at lower temperatures.

Anyway, looking forward to playing with the card to find its sweet spot.

July 7, 2016 | 08:11 PM - Posted by Anonymous (not verified)

As AMD/GF's (licensed from Samsung) 14nm process node improves over time, expect that RX 480 silicon will have more power usage improvements and better clocks. All silicon processes are heat sensitive, so there is some variable you are missing or not understanding. Leakage increases with heat, and any under-volting improvements are probably due to less power throttling and higher average clocks/lower clock variance at the lower voltage points.

There are a lot of speed/temperature/electrical software/hardware control loops interacting on the RX 480, so things can appear counter-intuitive. A GPU's regulation/governing is a very complex stochastic process involving thousands of lines of firmware/software code with hardware based control systems, so the tweaking will go on with the RX480 until its replacement is announced and on the market, and probably a good while after that should it be needed.

July 8, 2016 | 07:51 AM - Posted by Olle P (not verified)

If saving power is the objective you need to lower the thermal/power threshold. That will reduce overall power consumption by keeping the clock speeds down.

Then reducing the voltage will pull performance back upwards, as it allows the GPU to run faster by using less power per clock cycle.

July 7, 2016 | 04:45 PM - Posted by Anonymous (not verified)

Yet you hear nothing regarding Nvidia fuckups. Especially not years and years later.

July 7, 2016 | 09:10 PM - Posted by Sid Vicious (not verified)

I know, it's crazy. Feels just like the Clintons vs. the average US citizen. I bring this up because some joker posted up top about Republicans frothing about her email system. WELL... if MY company had lost even !1! ITAR email to a hack or negligence, the owner would be in prison for 10 years and we would be out of business. A little bit of a double standard? Yes, I do think so.

That doesn't get AMD off the hook when they mess up. Each company should be held to the same standard. Unfortunately there are a lot of politics involved. Oh, and fanboiz. Or however you spell it.

July 8, 2016 | 03:12 PM - Posted by Jdwii (not verified)

Meh, politics should stay off this site, and out of my life too, since anyone who claims either party is good is wrong, and anyone who likes Hillary or Trump is wrong.

July 8, 2016 | 05:19 PM - Posted by Anonymous Nvidia User (not verified)

That's what AMD fanboys are for. They never forget anything just like a nagging wife.

Even if 970 actually did only have 3.5 gigs, it doesn't matter as it was the best selling video card in history because of performance/price ratio.

I don't get where everything is swept under a rug. Most of their blunders are minor. If you read an Nvidia driver log you know what open issues are and see when they fix things too. Gotta love that transparency.

AMD flat out lied about the RX 480 TDP. Wait, is it 110 watts? Nope, that's only the GPU. Is it 150? Nope again; even in compatibility mode it still is not hard locked at 150 watts.

The Pascal cards are at TDP because they throttle to maintain it. When you advertise something, you'd better live up to it.

AMD says that the 470 is going to be the 2.5 times more energy efficient Polaris but their roadmap advertises the whole Polaris architecture.

AMD is constantly ramping up their cards and power consumption to stay competitive with Nvidia's last-gen cards at stock settings. A more efficient architecture usually allows a decent overclock as well. If you compare a max-overclocked card to the same, the Nvidia is usually on top and does it with less wattage.

Maybe benchmarks should be locked at the same wattage. What would you think of AMD's great benchmarks then?

July 7, 2016 | 05:08 PM - Posted by leszy (not verified)

It reminds me of FreeSync Ghosting sh.. storm. Congrats.

July 7, 2016 | 05:17 PM - Posted by JohnGR

Found problem in AMD hardware

AMD fixes it in 2-7 days.

Everyone blames AMD for being incompetent.

Found problem in Nvidia hardware

Nvidia says it will fix it in the next drivers. Usually they don't. Months later users ask if Nvidia has forgotten them...

Everyone praises Nvidia for being the perfect company.

July 7, 2016 | 10:57 PM - Posted by svnowviwvn

AMD did NOT fix the non-compliance on the PCI-e connector.

Quote: With the original launch driver we saw the PEG slot pulling 6.8A or more, with the 6-pin pulling closer to 6.6A. On 16.7.1 the PEG slot draw rate drops to 6.1-6.2A. Again, that is still above the 5.5A rated maximum for the slot, but the drop is significant.

So yes AMD deserves all the blame for releasing this non-compliant card in the first place. And their so-called fix is still 12% over the 5.5A max.

Even in compatibility mode it is still non-compliant.

Quote: Current still doesn’t make it down to 5.5A in our testing, but the PEG slot is now pulling 5.75A in our worst case scenario, more than a full amp lower than measured with the 16.6.2 launch driver.
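The over-spec percentages quoted above follow from simple arithmetic against the slot's 5.5 A rating; a minimal sketch (the measured currents are the ones quoted from the review, the function name is just for illustration):

```python
# The PEG slot's 12 V rail is rated for 5.5 A per the PCI-E spec cited above.
SLOT_LIMIT_A = 5.5

def percent_over_spec(measured_amps: float, limit_amps: float = SLOT_LIMIT_A) -> float:
    """Return how far a measured current is over the rated limit, in percent."""
    return (measured_amps / limit_amps - 1.0) * 100.0

# Launch driver (16.6.2) worst case vs. the 16.7.1 fix vs. Compatibility mode:
print(f"16.6.2: {percent_over_spec(6.8):.0f}% over spec")   # ~24% over
print(f"16.7.1: {percent_over_spec(6.15):.0f}% over spec")  # ~12% over, matching the comment
print(f"Compat: {percent_over_spec(5.75):.0f}% over spec")  # ~5% over
```

So the "12% over" figure checks out for the mid-point of the 6.1-6.2 A range, and even Compatibility mode remains a few percent above the rating.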

July 8, 2016 | 02:31 AM - Posted by JohnGR

You can quote me again if EVER Nvidia fixes one of those bugs. Some go back for years, from Maxwell; others are only one month old.

July 8, 2016 | 11:15 AM - Posted by Batismul (not verified)

I had the most issues with AMD hardware in internet cafes here and got tired of fixing and tweaking them to work properly. Eventually they ditched those systems and upgraded to Intel and Nvidia (Fermi, Kepler and Maxwell only), and they're making more profits now; they hardly ever have to repair those gaming PCs.
You're welcome to visit Thailand and see the many gaming internet cafes running these systems day & night, FYI.

July 8, 2016 | 12:56 PM - Posted by JohnGR

Fan problems on 1080 cards.

DVI problem on Pascal.

Vive problems on Pascal.

High idle power problems on Pascal cards.

DPC latency and stuttering problems on Pascal.

There were also people talking about throttling problems on Pascal.

Enjoy your "problem free" Nvidia cards. Especially that power consumption problem at idle is perfect to drive the power bill of an internet cafe to the roof.

July 8, 2016 | 06:12 PM - Posted by Nintex (not verified)

A non-issue for our GTX 1080s. And anybody with a Vive uses that on the HDMI port, and DP goes to your monitor. If you're still using fucking DVI, you have no business buying or owning such high-end VR gear!
The very few, and I mean FEW, cases are nowhere near as widespread as the RX 480 powergate issue is, Mr. AMD fanboy, trying to do damage control and attack a reputable tech site like PCPER. Pls get IP banned already; you're making the comments section even more toxic than it already is...

July 8, 2016 | 06:14 PM - Posted by Batismul (not verified)

Hah, talking about power bills: ours got lowered significantly by using all Nvidia and Intel over the junk AMD hardware we got rid of!

We are saving on power and repair, how about that double combo right there.

Your head must be so far up AMD's ass its not even funny anymore

July 8, 2016 | 06:34 PM - Posted by Anonymous Nvidia User (not verified)

Play one 1080p video on a Radeon and your power consumption is through the roof. AMD finally addressed this after, what, 4 years, with Polaris. Supposedly got a 30% reduction, but more is still more. It still pulls an impressive 39 watts vs 7 watts for Pascal.
Others are substantially higher.

How about multi-monitor: 40 watts for the RX 480 vs the 10 watt range for most Nvidias.

Sorry John. At idle Pascal is 6 watts and Polaris is 15 watts.
Similar node tech, 16nm vs 14nm. You'd think a mainstream Polaris card would beat high-end Nvidia with a 2nm bigger node. Nope again.

Oh, my bad, that's talking about monitors above 120 Hz. It isn't really that big of an issue. Just lock the monitor to 120 Hz and the problem suddenly isn't a problem. You'll be counting the money you'll be saving.

For someone running a gaming cafe, where gaming consumption is usually substantially less on the more efficient Nvidias, that is going to save them lots more money.

July 8, 2016 | 05:31 PM - Posted by Anonymous Nvidia User (not verified)

Pascal just came out a little while back. You gotta give them some time to fix things.

Give Polaris time. I'm sure more issues will pop up there. Especially with Vega, as it's supposed to be a new architecture, right?

After all Windows is still fixing bugs up until the time it goes to nonsupport status.

Everyone has problems. Some can't be fixed easily without hardware revision.

I'm glad AMD was able to defuse their problem somewhat. However, it is still drawing amperage over spec, if I understand correctly.

July 7, 2016 | 05:23 PM - Posted by Lance Ripplinger (not verified)

So does this mean that the engineers will likely be tweaking the PCB and we will see a new revision of it in future stock of the RX480?

July 7, 2016 | 05:31 PM - Posted by JohnGR

Probably. But companies and users will turn their focus to custom cards that will offer at least an 8-pin connector. So even if they come up with another revision, people will keep avoiding the reference design. We are not talking high-end stuff here. A difference of $10-$20 between the reference and the custom model will send everyone who knows what a GPU is to the custom model.

July 8, 2016 | 05:38 PM - Posted by Jon Jon (not verified)

I've never been a person to buy reference cards anyway.

It just seems like a big waste when you get far better performance and temperatures out of the third party cards.

I am personally excited to see how the ASUS STRIX RX 480 performs, as this really entices me to upgrade my ASUS STRIX R9 380 so I can have 60FPS max setting 1080P gaming (I just game on my 60 inch TV with my steam controller or xbox controller depending on the game).

July 7, 2016 | 05:28 PM - Posted by JohnGR

So, Allyn, Ryan,

are you going to test a GTX 950 with NO power connector under overclocking and tell us if that card draws 85-90W from the PCIe bus, having no other power source to turn to, or do we not want to spoil Nvidia's image?

At TechPowerUp they measured 74W at defaults. After overclocking they got 20% extra performance. Performance costs energy; it's not free. 20% extra performance, best case scenario, means 20% extra power. That's 90W from the PCIe bus.

July 7, 2016 | 05:31 PM - Posted by Anonymous (not verified)

Did he not say they tested it already and it's not as ''bad'' as the RX 480 problem?

July 7, 2016 | 05:43 PM - Posted by JohnGR

People were pointing at the GTX 960 Strix as having problems, NOT the GTX 950 with NO power connector.

I don't know who the moron was who started all the fuss about the Strix. The GTX 960 comes with a TDP close to 120W and is also equipped with a 6-pin PCIe power connector. So even if it produces spikes, the average will always be much lower than the limits.

On the other hand, the GTX 950 with no extra PCIe power connector is at its limit of 75W at normal frequencies. If the card was limited by the manufacturer, it would throttle and you wouldn't get significant performance gains even after overclocking it. But at TechPowerUp they measured 20% extra performance, so the card is not limited. It will ask for more power from the PCIe bus and it will get it.

The thing here is that tech sites are losing the chance to write an article and warn users, to educate users, that overclocking can push many cards outside the limits. While AMD messed up big time, many don't realize that they are running their bus outside its limits, even without owning an RX 480.

July 7, 2016 | 07:10 PM - Posted by Keif (not verified)

What are you talking about? It's 3% slower than the reference 950 in their Performance Summary and the average power consumption during gaming is 75w.

July 8, 2016 | 02:22 AM - Posted by JohnGR

75W at gaming. Overclock it and you go to 90W. What is it that makes it difficult for you to understand? And don't say it doesn't go to 90W, because 20% performance doesn't come free.

July 8, 2016 | 04:23 AM - Posted by MEmememe (not verified)

Yeah, even the 750 Ti at stock used 92W through that PCIe slot according to Guru3D. An overclocked 950 or 750 Ti w/o the damn 6-pin would be 90-100W.

Now AMD is 79W on the old driver, 76W on the new and 71W in compatibility mode, and that is somehow an issue then. Ok.

July 8, 2016 | 08:10 AM - Posted by JohnGR

The 750 Ti is a 60W TDP card. 90W at peak isn't something really serious. Typical power draw will be under or close to the 75W limit.

But in the case of the GTX 950 with no power connector, that 90W will be constant, typical power draw from the PCIe slot, not just peak. The GTX 950 with NO power connector will be stressing the PCIe bus as much as, if not more than, the RX 480, under overclocking.

July 8, 2016 | 11:22 AM - Posted by Keif (not verified)

They didn't increase voltage. Does power consumption increase with the same voltage but higher clock speeds?

And exceeding when overclocked is a completely different story than exceeding at stock.

July 8, 2016 | 02:08 PM - Posted by JohnGR

Yes it does. And that's also the case with CPUs. So if your motherboard is barely covering the TDP of your processor, don't just blindly overclock it sky high, thinking that not touching the voltage will be OK. You could end up with a dead motherboard.

If you search on Google you will find mathematical formulas that try to roughly calculate the power consumption of a chip based on its frequency and voltage.

Increasing frequency increases power consumption roughly linearly. This means that, roughly speaking, a 20% increase in frequency will also move power consumption up about 20%.

Of course, when you are also increasing voltage, things are much worse for power consumption. An increase in voltage increases power consumption roughly quadratically, so power consumption climbs much faster in that case.
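The scaling described above matches the standard CMOS dynamic-power approximation P ≈ C·V²·f (linear in frequency, quadratic in voltage). The sketch below is illustrative only; the clock and voltage values are example RX 480-like numbers, not measurements:

```python
def dynamic_power(freq_mhz: float, volts: float, c: float = 1.0) -> float:
    """Classic CMOS dynamic-power model: P scales linearly with frequency
    and quadratically with voltage. Units are arbitrary here."""
    return c * volts ** 2 * freq_mhz

baseline = dynamic_power(1266, 1.15)   # illustrative stock-like operating point

# +20% frequency at fixed voltage -> +20% power (the linear term):
oc = dynamic_power(1266 * 1.2, 1.15)
print(f"+20% clock: {oc / baseline:.2f}x power")        # 1.20x

# Dropping voltage 1.15 V -> 1.05 V at the same clock (the quadratic term):
uv = dynamic_power(1266, 1.05)
print(f"1.05 V undervolt: {uv / baseline:.2f}x power")  # ~0.83x, i.e. ~17% less
```

Note this simple model ignores static leakage and fan power, which is one reason measured savings from undervolting can come in smaller than the quadratic term alone predicts.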

July 8, 2016 | 02:56 PM - Posted by Keif (not verified)

Well, it's weird that in TechPowerUp's temperature testing there was no increase in temperatures with the overclock. It's possible the fan speeds are targeted to maintain 69C, but that's unknown and an assumption.

You're making bold accusations with no solid evidence.

It's also interesting w1z never made note of this in his review. Is he an Nvidia shill too?

And again, since you ignored it, overclocked power consumption is a whole different story. Nvidia got the most out of the PCIe specification; what users decide to do with it is on them.

July 8, 2016 | 03:37 PM - Posted by JohnGR

Good point about the temperature. In graphics cards, fan speed will change based on temperature, as we know. What could be different under OCing is the time it takes the card to reach that temperature and how much time it stays there. With no overclocking the card could be just touching that temperature and dropping really fast after the fans start spinning faster. Under overclocking the card could be staying at that temperature much longer. The cooling system is big enough to cope with the extra power consumption.

I am not making bold accusations. If there were a review proving it, everyone would be talking about obvious expectations. A card at 74W gets 20% extra performance under overclocking; it's at least naive and wishful thinking to believe that 20% more PERFORMANCE comes for free.

W1z is not obliged to talk in detail about what the card does under overclocking. He is just testing a card at defaults and adds a little overclocking to the mix, because people will want to see a page about overclocking. If he doesn't put an OC page in his reviews he will get dozens of posts asking for it. But at the same time, no one is forcing him to repeat the entire review with the card overclocked.

And please don't end your post with a lie. From the first time I was saying that AMD messed up at defaults and that overclocking was a different matter. Allyn was ultra happy to agree with me back then.

See my post and his here

That doesn't change the fact that many will think that this is an RX 480 matter that doesn't affect their hardware. Well, it probably does. People just don't know it, and unfortunately, with the press not caring, they will never find out.

July 8, 2016 | 05:52 PM - Posted by Jon Jon (not verified)

I have been going through a lot of these posts, and it is making my brain hurt reading how people don't understand that higher frequency -> more power draw -> more heat -> higher risk of instability.

This is just how it works.

This is why you need to get "OC Friendly" power supplies and motherboards.

How people can't seem to put the two together doesn't make sense.

The reference RX 480s drawing too much power from the PCI-E slot at stock was a problem, and AMD corrected it via drivers.

However, nobody should be expecting to overclock ANYTHING on a budget motherboard and power supply, but they should be able to expect to run equipment at stock in the board.

That would be like me getting mad if I got a $50 budget motherboard and I tried a modest overclock on my CPU and the computer BSODs because it isn't stable due to the increased power requirement.

It's honestly a joke to me.

July 8, 2016 | 05:43 PM - Posted by Anonymous Nvidia User (not verified)

Overclocking is done at the user's own risk. It is not standard and doesn't have to be in spec. If anything burns out, it is their fault.

You can add another 75 watts of power at least to Polaris with the +50% power limit. Where would their numbers go? Through the roof. Nvidia's are limited to +20% power at most over spec. I think most modern hardware has at least 25% overhead built in, if not more. 50% overhead, maybe not.

Does the 950 hit 90 watts with standard boost? Polaris was over spec with standard boost.

July 8, 2016 | 05:55 PM - Posted by JohnGR

Never said they are the same cases. In every post I say that AMD messed up.
But I am also saying that the press shouldn't treat it as an isolated/unique case, and should inform users who overclock that maybe they are also going over spec. Most people would never think about power usage from the PCIe bus when overclocking a graphics card. "Temps are fine, 3DMark doesn't crash, so everything is OK." Well, maybe not.

July 8, 2016 | 06:57 PM - Posted by Anonymous Nvidia User (not verified)

I'll agree with you here, John. If they don't get any artifacting, they deem it the max overclock and OK. No one (reviewer) is paying attention to whether they are over the PCI Express spec or not.

I usually don't overclock my video cards anyway; stock is usually good enough. Overclocking generally isn't worth the extra 10% or so. You are shortening the lifespan of your components due to the excess heat from increased power draw.

July 7, 2016 | 07:10 PM - Posted by jabbadap (not verified)

Well, Nvidia's cards are limited by the TDP setting, which they follow quite precisely (feedback loop). I don't know how w1zzard OCed it, but if he did not touch the TDP percentages, the BIOS would try to restrict it to the given stock power limit (Nvidia overclocking is complicated as hell nowadays). At least he said the voltage at 1447MHz was 1.08V, which is rather low. But yeah, you are right, reviewers should always warn people not to OC a card if doing so will seriously exceed the PCIe slot amperage specs.

July 8, 2016 | 02:29 AM - Posted by JohnGR

The BIOS in this case would NOT try to restrict the card; that's why he gets 20% extra performance. No matter how complicated Nvidia's overclocking is, there is one simple fact: you can't get 20% extra performance with no extra power consumption. That 1.08V is necessary to bring power consumption down to 75W while running at the default clocks. W1z overclocks both the GPU and the GDDR5 to get that 20%. The card definitely gets over 80W, probably to 90W, based on the performance difference.
But Ryan will be as happy to hide an Nvidia problem as he is to analyze an AMD problem. As I said in another post, Raja must be the most stupid person in the industry, giving exclusives to a site that is in bed with Nvidia.

July 8, 2016 | 11:36 AM - Posted by Batismul (not verified)

Poor you, damage controlling for AMD and not getting a cent for it.
Busy few weeks since the launch of this card, huh?
Keep it up and soon you won't be able to post here anymore because of a perma ban or IP ban, you sad pathetic individual.

July 8, 2016 | 05:57 PM - Posted by JohnGR

Well I guess you feel really great with your ignorance, so I will not try to spoil it. Enjoy it.

July 8, 2016 | 06:17 PM - Posted by Nintex (not verified)

Talking about ignorant, LOL. Hypocrite much? Moron...

July 8, 2016 | 12:16 PM - Posted by Allyn Malventano

The 480 was drawing as much as >50% over at STOCK settings. The 950 sure as hell didn't do that. Sure, we can test it, but I'd imagine it couldn't be any worse than the 480 is post-fix. You just have to have AMD better than NV in every possible way in your own mind, don't you?

July 8, 2016 | 01:45 PM - Posted by JohnGR

Oh, spare me the lecture and the hypocrisy. You were promising to check it the last time, remember? You were so happy that the AMD fanboy was giving NO excuses to AMD that you were promising to check it. Now you IMAGINE? Oh, come on. We both know you are NOT going to do it, because I am probably right. A card that typically consumes 74W at defaults while playing, with peaks at 79W, and gives you 20% extra performance under overclocking, will NOT keep consuming 74W. It will jump at least 10-15W higher, with peaks probably reaching 95-100W.

Yes, some people were pointing at the Strix card, a card with a 120W TDP and an extra 6-pin connector. What better card to show that Nvidia's cards don't go over the limits? I wonder who was the clever guy pointing at a 120W TDP card that can get at least 150W of power from two different sources as having the same problems as the RX 480.

And no, when I was saying the last time that AMD messed up, I wasn't trying to make AMD look better. And yes, AMD's card is out of limits at defaults, and that's why everybody blamed them, ME INCLUDED. But people will learn NOTHING from this. They will keep thinking that this was an AMD case that doesn't affect them. At the same time their highly overclocked card could be sucking close to 90W constantly from the PCIe bus while playing, not having anywhere else to turn for the needed power.

July 8, 2016 | 07:09 PM - Posted by Anonymous Nvidia User (not verified)

Here, John, is the Asus model from TechPowerUp. The 950 consumes 79 watts maximum, 4 watts over spec, wow. I'd imagine the average is way lower though. Even if it were 4 watts sustained, I don't think it would be a big issue.

All of the 950 cards I've come across are partner cards. How would liability come back to Nvidia?

July 8, 2016 | 06:19 PM - Posted by Nintex (not verified)

Why even bother with this moron anyway. He is just that ignorant butthurt fanboy that won't let go. Probably has multiple accounts on other tech sites doing the same shit.

July 7, 2016 | 05:34 PM - Posted by Anonymous (not verified)

Nvidia pays people to troll tech sites. Maybe Nvidia finds fans who are brain dead and easily controlled.

When DX12 games come out in numbers in the next 6 months, Nvidia trolls will cry. Software cheating tricks will not help Nvidia fans with DX12 games.

It is going to be a very interesting situation.

July 7, 2016 | 09:18 PM - Posted by Sid Vicious (not verified)

Nvidia "politicians" pay people to troll tech "web" sites. Maybe Nvidia "politicians" find fans "sheeple" who are brain dead and easily controlled.

Fixed that there for yah. People are waking up, but not fast enough.

In my view, the 1060 and this whole "FE" jazz is absolute garbage and a way to soak users for another $50. But it's working. See above about "Sheeple". Nvidia has a superior product, for sure. And they are run better/more professionally. But come on, swing for the little guy every once in a while! It really builds up $ex appeal! If they had priced this around $25 lower and left the clocks down so it competed directly with the 480 but could OC to the MOON, they could have had a really awesome product that generated a ton of buzz.

July 8, 2016 | 01:24 AM - Posted by Anonymous (not verified)

These Nvidia fans are liars. They are not trustworthy. Here is why:

"MSI GeForce GTX 1080 DirectX 12 GTX 1080 SEA HAWK EK X 8GB 256-Bit GDDR5X PCI Express 3.0 x16 HDCP Ready SLI Support ATX Video Card"

"OUT OF STOCK PNY GeForce GTX 1080 Founders Edition 8GB GDDR5 PCI Express 3.0 Graphics Card VCGGTX10808PB-CG"

I also suspect NVidia marketing people prey on gullible NVidia fan boys.

July 7, 2016 | 05:35 PM - Posted by nitrooo (not verified)

How about XFX RX 480 over clocked editions? Did anyone test them? Any links?

July 7, 2016 | 05:43 PM - Posted by Tony Morrow (not verified)

Now I'm curious how the power phases in other cards are wired up. The r9 390 and gtx 960 didn't show nearly as much power fluctuation from the PCIe connections like the RX 480 did. Their graphs were practically flat while the external power connector picked up the rest of the load.

July 7, 2016 | 07:40 PM - Posted by Hotcooler (not verified)

From what I've heard from people doing hardware voltmods, usually the VRAM is wired up to the PCI-E slot and pretty much everything else takes power from the 6/8-pin connectors.

July 7, 2016 | 09:29 PM - Posted by Tony Morrow (not verified)

And based on Allyn's picture, the RX 480 looks to be the opposite, with the VRAM being powered by the 6-pin connection.

Glad to see AMD has mitigated this power issue. My 480 is expected to arrive Friday to replace a 6870.

July 7, 2016 | 07:03 PM - Posted by tolerance (not verified)

In my opinion, AMD's PR initially created that power misconfiguration issue, hoping (or already having the needed reviewers in place) that some reviewers would spot it and, consequently, sound the alarm to the masses. After that happened, the PR came out reassuring everybody that they would fix it soon, and indeed they did fix it. Providing the fix creates a positive impression that the red team really cares about its audience; that we are of the utmost importance to them. But what was actually happening is AMD "programming" the multitude's minds. They are trying to have more folks join their bandwagon; trying to pull, figuratively speaking, the "market blanket" to their side.

July 7, 2016 | 10:06 PM - Posted by Anonymous (not verified)

Did a lot of people cancel their RX 480 orders after hearing about this issue?

July 8, 2016 | 08:41 AM - Posted by Anonymous (not verified)

Nope, still sold out in most parts of the world. :(

July 7, 2016 | 10:32 PM - Posted by Hitman187 (not verified)

I was going to save money and just upgrade to an RX 480. I bought an R9 270 expecting to upgrade 6 months later, but ended up putting it off more and more until this last generation of GPUs. The 480 would have been perfectly fine for what I need, especially at $200, but after the whole power thing, I decided to go with the GTX 1070. I know I'm not the only person that did the same.

I've always bought AMD since I started building PCs in 2010, but now I'm going to be Green and Blue for the first time ever =/.

I can assure you, AMD isn't dumb enough to do what you're suggesting.

July 8, 2016 | 03:16 PM - Posted by Jdwii (not verified)

Please tell me this is sarcasm if not then I'll say that's sad

July 7, 2016 | 07:11 PM - Posted by John Pombrio

How is overclocking the RX 480 going to affect the power draw of the card? One site found a 122 watt average 12-volt power draw against the 71.28 watt PCIe 12-volt max spec while running a benchmark at a minor 1300 MHz overclock. The new drivers and compatibility mode leave scant room for any more power draw from the reference card. Custom cards will be running even closer to the edge with factory overclocks and will need to do a lot more than just let users run the new AMD driver. They will need to rewrite the video BIOS at a minimum and balance the cards with more precision. It is not heat that is the 480's limiting factor in overclocking; it is the power needed to do it.

July 7, 2016 | 08:40 PM - Posted by Simon (not verified)

You don't buy reference cards for overclocking, not if you're serious about it. Much less AMD reference cards. Now I'm just glad this driver fix will make it next to impossible for AIBs with better 2x6pin, 8pin and 8+6pin setups to go over spec. So this is essentially a non-issue.

July 7, 2016 | 07:32 PM - Posted by Keif (not verified)

Phew. The internet can rest again! And hugely thanks to you guys! Cheers!

While I'm not concerned about the PCIe power draw and I'm not even interested in a reference 480, it is something that I will take into consideration when buying a card in the next few weeks, and I'll be keeping an eye on how partner 480s handle it.

It's about peace of mind for me and not having to be concerned with it. The larger the margin the better. Not much different than how everyone runs their GPUs and Intel CPUs at 70c when they can run at 90c and above.

July 7, 2016 | 10:03 PM - Posted by Anonymous (not verified)

great investigation by pcper
great response by AMD
this is how it should be

July 8, 2016 | 12:19 PM - Posted by Allyn Malventano

Exactly. Just imagine how much easier it would be to do our jobs if more than half of the comments were not accusations by fanboys. We reported on a thing. They fixed a thing. Clearly it was a thing. End of story. 

July 8, 2016 | 06:56 PM - Posted by Anonymous (not verified)

When you stop the bias toward Nvidia, maybe the accusations will stop also.

July 8, 2016 | 07:25 PM - Posted by Anonymous Nvidia User (not verified)

Still the accusations of Nvidia bias. You guys are one of the least biased sites in my opinion. Keep up the good work. I guess since I prefer Nvidia my opinion is biased. I don't hate AMD like AMD fanboys hate Nvidia. Oh well.

Keep your chin up, Allyn. Remember that it isn't the sword that kills the trolls; it's knowledge that does it. Unshakable truth makes the trolls wither. To be honest, you'll probably have to do PEG analysis of every card from now on to keep fanboys from viewing you as biased. When the 1060 comes out, they're going to demand it anyway.

July 8, 2016 | 07:54 PM - Posted by Anonymous (not verified)

An Nvidia user saying you're not biased; it must be true.

July 9, 2016 | 09:07 AM - Posted by Allyn Malventano

We were doing it for every card anyway, but didn't focus on it specifically in the 480 review (but remember, we are supposedly just so set on making them look bad - oh noes!!). We'll keep doing what we were doing.

July 9, 2016 | 01:59 PM - Posted by Anonymous Nvidia User (not verified)

You are correct, Allyn. Being a new user, I didn't know this methodology was already your staple. I went back and saw that you used it for the 1080 review. The 1080 being 2x more efficient in fps/watt than a Fury X. Wow.

Good to know that you are looking out for everyone's hardware and helped head off a potentially costly result of using Rx 480.
Did you guys get many thanks for averting "disaster"? Nope. You got called Nvidia biased, fanboys and trolls. Some people can't see the big picture. Thanks guys at PC Per for saving AMD users' hardware.

No one should have to replace their motherboard unless they want to. AMD fanboys would justify it away by saying their stuff was old and needed upgrading anyway.

You guys must be mind readers to have put this type of testing into use well before Rx 480. Just because you knew AMD was going to violate the spec. LOL

Keep up the good work. I know you guys will regardless.

July 9, 2016 | 04:00 PM - Posted by Anonymous (not verified)

Tldr:kissing pcper's and nvidia's @$$.

July 9, 2016 | 08:38 PM - Posted by Anonymous Nvidia User (not verified)

Too long didn't read but yet you knew the gist of it. OK.

You don't want to know the truth because it hurts.

Short enough for you to comprehend.

July 9, 2016 | 11:25 PM - Posted by Anonymous (not verified)

Wow, you know what tldr means (you had to spell it out). And here I was thinking most Nvidia users are morons. /clap

July 10, 2016 | 07:52 PM - Posted by Anonymous Nvidia User (not verified)

I won't comment on your intelligence. If you're an AMD fanboy, that says plenty. Intelligent people buy the best they can afford.

AMD fanboys blindly support a company that releases the same old wattage-guzzling, heat-blasting, 4-year-old tech (asynchronous compute and GCN) video cards and calls them new. LOL. Doesn't sound smart to me.

July 7, 2016 | 11:01 PM - Posted by Anonymous (not verified)

The motherboard PEG slot isn't maintaining a stable 12 V. I see the problem in the motherboard: if the motherboard held 12 V stable, the current would drop to 5.5 A.
Just look at the difference between the PSU and motherboard voltages under load. Could anyone test by adapting another 6-pin connector to feed the graphics card's PEG slot power?
Sorry, I know my English sucks. =/
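The point this commenter is making follows from Ohm's-law bookkeeping: the GPU regulates power, so at a fixed slot power the current is I = P / V, and a sagging 12 V rail pushes the current up. A minimal sketch, with the 11.5 V figure chosen purely as an illustrative droop (not a measurement from the article):

```python
# Why a drooping 12 V rail raises slot current: the card draws a set amount
# of power, so current scales inversely with rail voltage (I = P / V).

def slot_current(power_w: float, rail_v: float) -> float:
    """Current through the slot for a given power at a given rail voltage."""
    return power_w / rail_v

slot_power = 66.0  # watts drawn through the PEG slot's 12 V pins

print(f"at 12.0 V: {slot_current(slot_power, 12.0):.2f} A")  # 5.50 A, at spec
print(f"at 11.5 V: {slot_current(slot_power, 11.5):.2f} A")  # ~5.74 A, over 5.5 A
```

At the same 66 W of slot power, a rail that droops to 11.5 V under load would pull about 5.74 A, which is consistent with the commenter's claim that a board holding a stable 12 V would sit right at the 5.5 A mark.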
