
Power Consumption Concerns on the Radeon RX 480

Manufacturer: AMD

Too much power to the people?

UPDATE (7/1/16): I have added a third page to this story that looks at the power draw of the ASUS GeForce GTX 960 Strix card. Many readers on our site and on reddit pointed to this card as having the same problem as the Radeon RX 480. As it turns out...not so much. Check it out!

UPDATE 2 (7/2/16): We have an official statement from AMD this morning.

As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).

Honestly, that doesn't tell us much. And AMD appears to be deflecting slightly by using words like "some RX 480 boards". I don't believe this is limited to a subset of cards, or review samples only. AMD does indicate that the 8 Gbps memory on the 8GB variant might be partially to blame - which is an interesting correlation to test out later. The company does promise a fix for the problem via a driver update on Tuesday - we'll be sure to give that a test and see what changes are measured in both performance and in power consumption.

The launch of the AMD Radeon RX 480 has generally been considered a success. Our review of the new reference card shows impressive gains in architectural efficiency, improved positioning against NVIDIA’s competing parts in the same price range, as well as VR-ready gaming performance starting at $199 for the 4GB model. AMD has every right to be proud of the new product and should hold this position alone until the GeForce product line brings a Pascal card down into the same price category.

If you read carefully through my review, there was some interesting data that cropped up around the power consumption and delivery on the new RX 480. Looking at our power consumption numbers, measured directly from the card, not from the wall, it was drawing slightly more than its advertised 150 watt TDP. This was done at 1920x1080 and tested in both Rise of the Tomb Raider and The Witcher 3.

When overclocked, the results were even higher, approaching the 200 watt mark in Rise of the Tomb Raider!

A portion of the review over at Tom’s Hardware produced similar results but detailed the power consumption from the motherboard PCI Express connection versus the power provided by the 6-pin PCIe power cable. There has been a considerable amount of discussion in the community about the amount of power the RX 480 draws through the motherboard, whether it is out of spec and what kind of impact it might have on the stability or life of the PC the RX 480 is installed in.

As it turns out, we have the ability to measure the exact same kind of data, albeit through a different method than Tom’s, and wanted to see if the result we saw broke down in the same way.

Our Testing Methods

This is a complex topic so it makes sense to detail the methodology of our advanced power testing capability up front.

How do we do it? Simple in theory but surprisingly difficult in practice, we are intercepting the power being sent through the PCI Express bus as well as the ATX power connectors before they go to the graphics card and are directly measuring power draw with a 10 kHz DAQ (data acquisition) device. A huge thanks goes to Allyn for getting the setup up and running. We built a PCI Express bridge that is tapped to measure both 12V and 3.3V power and built some Corsair power cables that measure the 12V coming through those as well.
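The arithmetic behind the capture is simple: each tap gives a voltage and a current sample, power is their product, and the card's total draw is the sum across rails. A minimal sketch in Python (the rail names and sample values here are hypothetical, not our actual capture format):

```python
# Per-sample power from the captured data: each rail contributes
# V * I, and the card's total draw is the sum across rails.
def rail_power(volts, amps):
    """Instantaneous power (W) for one rail, sample by sample."""
    return [v * a for v, a in zip(volts, amps)]

def total_power(rails):
    """Sum the per-rail power traces into one combined trace."""
    return [sum(samples) for samples in zip(*rails)]

# Hypothetical three-sample capture: slot +12V, slot +3.3V, 6-pin +12V.
slot_12v  = rail_power([12.1, 12.0, 11.9], [4.0, 4.1, 4.2])
slot_3v3  = rail_power([3.3, 3.3, 3.3], [1.2, 1.2, 1.3])
pcie_6pin = rail_power([12.0, 12.0, 12.1], [5.5, 5.6, 5.4])

combined = total_power([slot_12v, slot_3v3, pcie_6pin])
```

Accumulating these per-sample products over a full benchmark run is what produces the traces in the graphs that follow.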

The result is data that looks like this.


What you are looking at here is the power measured from the GTX 1080. From time 0 to about 8 seconds the system is idle; from 8 seconds to about 18 seconds Steam is starting up the title; from 18-26 seconds the game is at the menus; we load the game from 26-39 seconds; and then we play through our benchmark run after that.

There are four lines drawn in the graph, the 12V and 3.3V results are from the PCI Express bus interface, while the one labeled PCIE is from the PCIE power connection from the power supply to the card. We have the ability to measure two power inputs there but because the GTX 1080 only uses a single 8-pin connector, there is only one shown here. Finally, the blue line is labeled total and is simply that: a total of the other measurements to get combined power draw and usage by the graphics card in question.

From this we can see a couple of interesting data points. First, the idle power of the GTX 1080 Founders Edition is only about 7.5 watts. Second, under a gaming load in Rise of the Tomb Raider, the card is pulling about 165-170 watts on average, though there are plenty of intermittent spikes. Keep in mind we are sampling the power 1,000 times per second, so this kind of behavior is more or less expected.

Different games and applications impose different loads on the GPU and can cause it to draw drastically different power. Even if a game runs slowly, it may not be drawing maximum power from the card if a certain system on the GPU (memory, shaders, ROPs) is bottlenecking other systems.

One interesting note on our data compared to what Tom’s Hardware presents – we are using a second order low pass filter to smooth out the data, to make it more readable and more indicative of how power draw is handled by the components on the PCB. Tom’s story reported “maximum” power draw at 300 watts for the RX 480, and while that is technically accurate, those figures represent instantaneous power draw. That is interesting data in some circumstances, and may actually indicate other potential issues with excessively noisy power circuitry, but to us it makes more sense to sample data at a high rate (10 kHz) but to filter it and present it in a more readable way that better meshes with the continuous power delivery capabilities of the system.
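To illustrate what a filter like this does to raw samples, here is a toy second-order low-pass (two cascaded single-pole IIR stages) in pure Python; the 100 Hz cutoff is illustrative, not our actual filter design:

```python
import math

def lowpass2(samples, cutoff_hz, sample_rate_hz):
    """Second-order low-pass: two cascaded single-pole IIR stages.
    Smooths sub-millisecond spikes while preserving sustained draw."""
    # Smoothing coefficient for one RC stage at the given cutoff.
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = dt / (rc + dt)

    def stage(xs):
        ys, y = [], xs[0]
        for x in xs:
            y += alpha * (x - y)
            ys.append(y)
        return ys

    return stage(stage(samples))

# A steady 150 W draw with one single-sample 300 W spike, at 10 kHz.
trace = [150.0] * 100 + [300.0] + [150.0] * 100
smoothed = lowpass2(trace, cutoff_hz=100.0, sample_rate_hz=10_000.0)
```

A single-sample 300 W spike in an otherwise steady 150 W trace survives only as a small bump after filtering, which is exactly why our plots read as continuous draw rather than instantaneous extremes.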


Image source: E2E Texas Instruments

An example of instantaneous voltage spikes on power supply phase changes

Some gamers have expressed concern over that “maximum” power draw of 300 watts on the RX 480 that Tom’s Hardware reported. While that power measurement is technically accurate, it doesn’t represent the continuous power draw of the hardware. Instead, that measurement is the result of a high frequency data acquisition system taking a reading at the exact moment that a power phase on the card switches. Any DC switching power supply that is riding close to a certain power level is going to exceed it on the leading edges of phase switches for some minute amount of time. This is another reason why our low pass filter on power data can help represent real-world power consumption accurately. That doesn’t mean the spikes they measure are not a potential cause for concern; that’s just not what we are focused on with our testing.

Continue reading our analysis of the power consumption concerns surrounding the Radeon RX 480!

Setting up the Specification

Understanding complex specifications like PCI Express can be difficult, even for those of us working on hardware evaluation every day. Doing some digging, we were able to find a table that breaks things down for us.


We are dealing with high power PCI Express devices, so we are only directly concerned with the far right column of data. For a rated 75 watt PCI Express slot, power consumption and current draw are broken down into two categories: +12V and +3.3V. The +3.3V line has a voltage tolerance of +/- 9% (3.003V – 3.597V) and a 3A maximum current draw. Taking the voltage at the nominal 3.3V level, that results in a maximum power draw of 9.9 watts.

The +12V rail has a tolerance of +/- 8% (11.04V – 12.96V) and a maximum current draw of 5.5A, resulting in peak +12V power draw of 66 watts. The total for both +12V and +3.3V rails is 75.9 watts, but as footnote 4 at the bottom of the table states, the total should never exceed 75 watts, with neither rail exceeding its own current draw maximum.
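Those limits are easy to verify; a quick Python check of the arithmetic above:

```python
# PCIe slot power budget for a 75 watt "high power" card.
RAIL_LIMITS = {
    "+3.3V": {"volts": 3.3, "max_amps": 3.0},   # tolerance +/- 9%
    "+12V":  {"volts": 12.0, "max_amps": 5.5},  # tolerance +/- 8%
}
SLOT_CAP_W = 75.0  # combined cap from footnote 4 of the spec table

def rail_max_watts(rail):
    """Peak power for one rail at nominal voltage and max current."""
    limits = RAIL_LIMITS[rail]
    return limits["volts"] * limits["max_amps"]

per_rail = {rail: rail_max_watts(rail) for rail in RAIL_LIMITS}
# The rails individually sum to 75.9 W, but the slot as a whole
# must never deliver more than 75 W.
combined_cap = min(sum(per_rail.values()), SLOT_CAP_W)
```

The interesting wrinkle is that the binding constraint is current per rail (3A and 5.5A), not wattage, which is why the per-rail maximums sum to slightly more than the 75 watt combined cap.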

Diving into the current

Let’s take a look at the data generated through our power testing and step through the information, piece by piece, so we can all understand what is going on. The graphs built by LabVIEW SignalExpress have a habit of switching around the colors of data points, so pay attention to the keys for each image.


Rise of the Tomb Raider (1080p) power draw, RX 480

This graph shows Rise of the Tomb Raider running at 1080p. The yellow line up top is the total combined power consumption (in watts) calculated by adding up the power (12V and 3.3V) from the motherboard PCIe slot and the 6-pin PCIe power cable (12V). The line is hovering right at 150 watts, though we definitely see some spiking above that to 160 watts with an odd hit above 165 watts.

There is a nearly even split between the power draw of the 6-pin power connector and the motherboard PCIe connection. The blue line shows slightly higher power draw of the PCIe power cable (which is forgivable, as PSU 6-pin and 8-pin supplies are generally over-built) while the white line is the wattage drawn from the motherboard directly.

Below that is the red line for 3.3V power (only around 4-5 watts generally) and the green line (unused here; it only registers when the GPU has two 6/8-pin power connections).


Rise of the Tomb Raider (1080p) power draw, RX 480

In this shot, we are using the same data but zooming in on a section towards the beginning. It is easier to see our power consumption results here, with the highest spike on total power nearly reaching the 170-watt mark. Keep in mind this is NOT with any kind of overclocking applied – everything is running at stock. The blue line hits 85 watts and the white line (motherboard power) hits nearly 80 watts. PCI Express specifications state that the +12V power output through a motherboard connection shouldn’t exceed 66 watts (actually it is based on current; more on that later). Clearly, the RX 480 is beyond the edge of these limits, but not to a degree where we would be concerned.


The Witcher 3 (1080p) power draw, RX 480

The second game I tested before the controversy blew up was The Witcher 3, and in my testing this was a bigger draw on power than Rise of the Tomb Raider. When playing the game at 1080p it was averaging 155+ watts towards the end of the benchmark run and spiking to nearly 165 watts in a couple of instances.


The Witcher 3 (1080p) power draw, RX 480

Zooming in a bit on the data we get more detail on the individual power draw from the motherboard and the PCIe 6-pin cable. The white line of the MB +12V power is going over 75 watts, but not dramatically so, while the +3.3V power is hovering just under 5 watts, for a total of ~80 watts. Power over the 6-pin connector goes above 80 watts here as well.

June 30, 2016 | 08:55 PM - Posted by Allyn Malventano

...but they don't have the same problem.

July 1, 2016 | 06:38 PM - Posted by jabbadap (not verified)

You must have the nerves of an elephant. I really appreciate the thorough testing you have done to make people aware and make a fix possible in the future. Yet some people just reject and repeat the same points again and again... Hats off.

I know nvidia has quite rigid power delivery handling through the BIOS, which you can mess with yourself using BIOS editors like the Kepler/Maxwell BIOS tweakers (independent power capping on each power connector). To my knowledge Radeons do not have anything like that in their BIOSes, at least with earlier-generation GCNs (restrictions are based on GPU power "consumption"). I really hope they find the problem and can fix it.

July 2, 2016 | 08:06 AM - Posted by Peter2k (not verified)

I salute you for your nerves of steel, sir

July 4, 2016 | 12:11 AM - Posted by Anonymous (not verified)

Where the heck did you get that NV has the same damn problem, they do not. Not even close. Shut up fanboy and sit down.

June 30, 2016 | 08:20 PM - Posted by Allyn Malventano

Stock settings definitely keep power closer to the limit. After AMD issues a fix it should be even less of an issue / no issue at all.

June 30, 2016 | 08:40 PM - Posted by markymark (not verified)

What a phenomenal article. This was fascinating and easy to read. I imagine it is quite a bit of work, but it would be amazing if you could do something similar for all cards.

I still think the RX480 is a good card for the price but hopefully the board partners come out with single 8 pin PCIe designs.

AMD surely tested this as well; did they not think it would be an issue? Curious as to what they were thinking, and who made the decision (and why) to run so close to the edge with essentially no safety margin.

June 30, 2016 | 08:56 PM - Posted by btdog

I'm confused - how did this issue get by not only AMD, but ASUS, MSi, XFX, PowerColor, Sapphire?

I mean this is discovered almost immediately after release and yet none of the companies closely involved with its production had a clue?

July 1, 2016 | 12:43 AM - Posted by Anonymous (not verified)

That's exactly what I was wondering. It seems pretty clear to me that NVidia is worrying about losing the midrange market (the largest by far), so they probably knew about this all along, and had that guy on reddit ready to make his 'thread' at the click of a button.

July 2, 2016 | 01:34 PM - Posted by WithMyGoodEyeClosed (not verified)

Yes. Indeed Nvidia has a lot to worry about, starting with why someone died because a Tesla car full of Nvidia image recognition technology didn't detect an obstacle.
Now that is really under spec.

June 30, 2016 | 09:49 PM - Posted by remc86007

Have you guys tested undervolting the card and trying to lower power draw? Does it immediately become unstable? I wonder if the certification was done under a different voltage.

July 2, 2016 | 01:24 PM - Posted by WithMyGoodEyeClosed (not verified)

That will solve the "problem" and give possible headroom for overclocking (MHz). Maybe a better look (and tweak) at 'WattMan' (driver level control in 'Crimson') is needed.

June 30, 2016 | 11:46 PM - Posted by Mandrake

Thanks Ryan. Great write up.

I'm curious how AMD intends to correct this through a firmware update. I have to concede I didn't think such a feat was possible.

June 30, 2016 | 11:59 PM - Posted by Anonymous (not verified)

I think it can be corrected by lowering performance. That would be the simplest answer. Sort of the way capping frame rates lowers power consumption substantially.

Then you get into the whole "did I really get what I paid for" question.

If they lower voltage limits, then that could affect clock speeds negatively but it would draw less power of course.

July 1, 2016 | 03:01 AM - Posted by Anonymous (not verified)

With dynamic clock rates these days, do you ever really know what you are getting?

June 30, 2016 | 11:57 PM - Posted by Anonymous (not verified)

What I don't understand is how come the motherboard doesn't have some sort of regulator or circuit breaker on it that limits the amount of power that can be supplied to the pcie slot or trips if your device pulls too much.

I mean it would be inconvenient if your computer shuts down, crashes, or whatever if your video card pulls too much from the pcie slot. However, that is way better than your motherboard getting fried or catching fire cause it is sending more than 75 watts through it.

This makes me wonder about the rx 460s that are being designed to pull all power from the pcie slot and nothing else.

July 1, 2016 | 12:27 AM - Posted by Allyn Malventano

The motherboard makers are just providing +12V from the ATX power supply, through the board, and to the pins of the PCIe slots. There should not be any need for regulation provided the cards abide by the spec. That's the root issue here.

July 1, 2016 | 08:27 AM - Posted by Anonymous (not verified)

A guy on youtube actually put an RX 480 in an older budget motherboard, and it does actually shut down under heavy load.

https://youtu.be/rhjC_8ai7QA?t=2m

July 2, 2016 | 11:28 AM - Posted by Jann5s

This! nice find

July 2, 2016 | 05:29 PM - Posted by Anonymous (not verified)

Geeze, AMD cannot release a single product without some sort of weird controversy surrounding it... Man am I glad I avoid their hardware like the plague!!!

July 3, 2016 | 03:19 PM - Posted by Anonymous (not verified)

Do a little bit of research on Nvidia's drivers in the last six months - specifically versions 364.47, 364.51, and 364.72. Three in a row.

Make sure you also read about versions 169.75, 301.42, and 320.18.

See also the GTX 970, woodscrews, Fermi chips burning up, and on, and on, and on.....

July 1, 2016 | 12:23 AM - Posted by Anonymous (not verified)

I'm very new to all this, so I apologize if this is a dumb question.

You say this problem could easily have been avoided by designing the card with an eight-pin instead of a six-pin connector.

I believe I've seen some NVidia aftermarket cards which have different power connector configurations than their reference model.

Is it possible AMD's partners could release their own versions with eight-pin connectors to get around this issue?

July 1, 2016 | 12:26 AM - Posted by Allyn Malventano

Yes, 8-pin or even opting to draw more current from the existing 6-pin connector would be preferred over exceeding the current limit of the slot. Technically exceeding 6-pin power would also exceed the spec, but the wiring to that connector is far more capable than that supplied via the motherboard.

July 1, 2016 | 02:30 PM - Posted by herrdoktor330 (not verified)

Allyn, until such time that AMD issues a fix, do you think undervolting is a temporary fix for those of us who bought an RX 480 on launch day? Thankfully I haven't had a gaming session long enough to blow my motherboard out.

What do you think of this article as a way to mitigate damage? (Article is in German, but Google Autotranslate should work fine.)

https://www.computerbase.de/2016-06/radeon-rx-480-test/12/#abschnitt_vie...

July 1, 2016 | 02:48 PM - Posted by Allyn Malventano

So long as you have a decent / recent motherboard, I'd just shy away from overclocking at present. A few percent over the spec should be tolerable, but if your system starts shutting down randomly I'd either underclock or switch back to another GPU until a fix comes.

July 1, 2016 | 06:21 PM - Posted by Anonymous (not verified)

Can overcurrent on the 6-pin melt the plastic part of the connector?

July 1, 2016 | 07:24 PM - Posted by Anonymous (not verified)

In theory it could, but the 75 W "limit" on the connector is purely a PCIe specification limit. The connector is actually rated for 260 W.

July 2, 2016 | 01:51 AM - Posted by Allyn Malventano

6-pin / 8-pin contacts are rated at 8A or 9A depending on which spec you look at. That's *way* over 75W, which is just the conservative figure PCIe placed on that part of the spec. The 5.5A slot limit is far more of a concern, as the slot pins are rated at 1.1A each, and there are only 5 +12V pins to the card.

July 1, 2016 | 12:54 AM - Posted by Anonymous (not verified)

First of all, at stock speeds with a reasonable quality motherboard, it is probably not much of an issue. With just a single 6-pin connector this card shouldn't be overclocked though. AMD should provide new firmware to draw more from the 6-pin connector, and they should probably limit the max power draw more, to prevent drawing too much more than what is specified for a 6-pin connector.

The overclocking situation has gotten a bit out of hand, in my opinion. At this point, they almost should do what Intel does and sell a version explicitly for overclocking. It is more expensive to design the board to allow overclocking than to design for a specific base TDP. I am not that interested in overclocking, since I value stability over trying to squeeze a few more percent performance out of my hardware. With everyone expecting to overclock, all of the boards need more expensive power delivery circuitry, extra connectors, and such.

Also, with the clock speed being essentially completely dynamic, depending on TDP limits and temperature limits, how can I be sure what I am even buying? Will I get the same performance as the reviewer? I am not sure. It sounds like Nvidia's GPU boost tech will actually run above the specified boost clocks, or can be configured to do so. Customers aren't guaranteed to get the same performance as the reviewer; in fact the only thing really guaranteed is the base clock. The boost clock isn't actually guaranteed either. I think there is room for a card where you just get what you pay for, and you know exactly what you are getting.

With how close this card is to the power limit, it probably should not be overclocked much at all. I would assume that the pci-e draw can be modified via firmware. I would also assume that most after market power supplies are overbuilt, but pushing this card too far could overload a single 6-pin connector quite easily. What gauge wire is usually used for these? Anyway, if you are interested in overclocking, it may be best to wait for other board makers rather than buy this reference card. I was going to wait for a quieter non-reference design anyway. These will almost certainly include more power connectors.

July 1, 2016 | 01:48 AM - Posted by StephanS

So, are RX 480 cards being recalled, since it's now well confirmed by EVERYONE?

AMD just lost a lot with this engineering fiasco...

BTW, this is a question I asked PCPer to ask Raja yesterday.
Did they ask? Because I didn't see the transcript.

Also... no matter what, this shows that AMD's R&D division is not very good, and I can't believe that NO ONE, absolutely no one, at AMD did this test or asked for this test to be done.

My take? AMD is binning Polaris and this batch ended up needing overvolting. The RX 480 really needs to run 10% slower to be within spec. But that would have caused benchmark / product marketing issues.

Sad... AMD messed up, yet again. The bad image will linger for years, costing them hundreds of millions in sales & contracts.

The new Radeon group management is simply inept.

Raja Koduri should be fired over this mess... or at least not continue to get his multi-million bonuses at the cost of shareholders.

AMD... you are a complete and total mess. You can't blame Intel anymore for your failure at managing a business.

July 1, 2016 | 01:27 PM - Posted by Ra_V_en

How about NVIDIA getting sued for misleading buyers about the 4GB RAM on the GTX 970... this fanboyism is getting out of fucking control.

July 1, 2016 | 10:19 PM - Posted by Anonymous (not verified)

http://www.extremetech.com/extreme/199684-nvidia-slapped-with-class-acti...

http://www.pcworld.com/article/2887234/nvidia-hit-with-false-advertising...

Already done. The only fanboyism is coming from people who always want to bring up Nvidia. This is on AMD and only AMD. Nvidia had NOTHING to do with this. Period. Full Stop. End of story.

July 2, 2016 | 02:42 AM - Posted by arbiter

Sad part is, 970 issue results in what maybe little stuttering in video. 480 issue can result in frying your motherboard. A person with common sense would say what happened with 970 is a minor issue compared to 480's.

July 2, 2016 | 10:41 AM - Posted by Anonymous (not verified)

Pretty much. I'm not saying Nvidia is innocent, they made their bed they must lay in it. But to always bring up Nvidia when AMD screws up (or bringing up AMD when Nvidia screws up) is almost acting like a two year old. 'But Mommy, Tommy did it too!'.

July 3, 2016 | 07:47 AM - Posted by Anonymous (not verified)

Therefore if Nvidia lies about memory, that gives AMD full rights to break PCIe specs and damage mobos, true?

July 4, 2016 | 12:16 AM - Posted by Anonymous (not verified)

Most certainly and you are leading the AMD fanboy charge!!!

July 2, 2016 | 02:19 PM - Posted by WithMyGoodEyeClosed (not verified)

If there's anyone responsible (read: getting paid for assuming responsibility for this type of "issue"), it is the Technical Marketing Manager (or the like).
And that only if the overvolting existed (I do think so) and was known by him/her.
Nevertheless, since you claim to know the cause by explicitly stating it is overvolting, you also know that makes it no "engineering fiasco" whatsoever (you wish). Anyway, regardless of what relevant points you make with your rant, the way you've made it clearly shows your bias.
Also, what is in fact confirmed by EVERYONE is that AMD made the right market choice (not niche) by putting out the RX 4x0 series with such specs for such a low price. Something AMD is known for, besides leading in innovation. Although the best is yet to come, and you know it.

July 1, 2016 | 01:48 AM - Posted by StephanS

Thinking about it... this mistake is so stupid, so destructive.

Was it engineering sabotage?

Because nvidia is going to reap so much benefit from this;
the Radeon group is now a laughing stock.

For god's sake, the GTX 1080 is a 150w card and performs so, so much better than Polaris. What is AMD's problem with designing chips that run efficiently?
And now they have a slow design that consumes so much power it breaks the spec, to the point that motherboard vendors are concerned.

How many will honor a warranty now if you used an RX 480?

I just can't believe that all of the nearly thousand engineers in the Radeon group didn't think of checking board power draw...
Especially when these boards were supposed to be design-complete over 6 months ago!? What are those guys doing?

This is so stupid, to have DESTROYED the Radeon group's image.

While I'm at it: AMD's marketing for Polaris is idiotic and so self-righteous. "VR is not just for the 1%" WTF, AMD!?
I thought you cleaned house in the marketing department?

AMD, I see that 5 years on, and after two CEOs with golden parachutes deployed... you haven't changed... at all.

I have a bad feeling about ZEN. Keller left because there was no hope.

July 2, 2016 | 02:42 AM - Posted by arbiter

The GTX 1070 is a 150 watt card; the GTX 1080 is 180 watts.

July 3, 2016 | 07:49 AM - Posted by Anonymous (not verified)

Sabotage? No, just plain incompetence.

July 1, 2016 | 01:55 AM - Posted by StephanS

On the marketing side: "Don't silence us, silence the GPU"

??? This is so bad it's sad. No class, no thought, just BS.

And FALSE to boot. The RX 480 is a revolution in silent GPUs?
False claims might get you sued, AMD...

And this is the absolute best AMD can deliver.

You guys make nvidia look like true rockstars.

July 1, 2016 | 03:40 AM - Posted by Allyn Malventano

Cool your jets, brother. There's still a high chance AMD can fix this one, we just don't know how yet. More to follow soon.

July 2, 2016 | 02:44 AM - Posted by arbiter

Unless there is something they can do with a BIOS update, it seems like the only option would be to limit the clocks a bit more in the drivers. People that overclock, well, they are risking it, but that is the nature of overclocking.

July 2, 2016 | 08:17 AM - Posted by Peter2k (not verified)

I'm sure they can fix it

It's just the word of mouth that they can't fix afterwards.
I'm seeing the same uninformed messages on gaming forums:
"Don't buy the RX 480, it's gonna fry your board."

It's just something AMD could have done without, and that could have been avoided, easily avoided.
Slap an 8-pin connector on and done.
Tsk.
I also remember reading about AMD finally getting out of losing so much money, assuming these cards sell nicely.
Off to a bad start, even without a 1060 to counter yet.

July 1, 2016 | 03:41 AM - Posted by Tech Geek (not verified)

Being an Electronics Engineer Technologist myself, my concern for this is the connector. Typically solid unbroken circuit pathways can handle quite a lot of current because resistance is low. However whenever you rely on connector contact you add more resistance to the current path. The resistance between the card edge and contact can be affected by a few factors. First off the pressure the contact places on the card edge contact. Then you need to take into consideration the cleanliness of the mating surfaces and the total area of the mating surfaces. When I refer to cleanliness, it could be any contaminant that stops the mating surfaces from fully mating across the entire surface area of the contact. Oils, dust, metal oxides, etc. It could also be corrosion that develops on the contacts over time. Ultimately anything that reduces the surface area of the mating surfaces adds resistance, and this can change over time. Generally speaking new connections made by a recently inserted card edge are going to be the best case scenario. It is a given that after some time, the contact resistance is going to be higher than it was originally.

Now why all this discussion about resistance? As Allyn has already pointed out, Ohm's Law. If the current remains constant but the resistance increases, power loss goes up (P=I^2*R). As we know, that power turns into heat, and in this case, since it's at the point of contact, the heat is generated at the mating surfaces of the connection. Because of the nature of the connection, there is virtually no airflow around these contacts to carry the heat away. The effect over time is that heat will accumulate and spread outwards into the connector shell. It is made of plastic and, if subjected to enough heat, could melt. Now I can only assume the spec given for PCI-E has taken worst-case scenarios for contact resistance / power loss / resultant heat into consideration and spec'd the maximum current delivery through these connections accordingly.
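To put rough numbers on the commenter's P=I^2*R point, here is a small worked example in Python (the contact resistance values are illustrative, not measured):

```python
# Heat dissipated at a contact: P = I^2 * R. At the slot's 5.5 A
# +12V limit, any rise in contact resistance multiplies the heat
# generated right at the mating surfaces.
def contact_heat_watts(current_amps, resistance_ohms):
    return current_amps ** 2 * resistance_ohms

CURRENT = 5.5  # amps, the +12V slot maximum
fresh = contact_heat_watts(CURRENT, 0.010)  # 10 milliohm contact
aged  = contact_heat_watts(CURRENT, 0.030)  # 30 milliohm contact
# Tripling the resistance triples the heat at the same current,
# all of it concentrated in a connector with essentially no airflow.
```

Because current is squared, running even modestly over the rated current amplifies this effect further.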

Now, the argument about average power versus maximum power. For non-math / engineering types this can be difficult to understand. Granted, when we say this is the maximum allowed, then that is what we should expect. However, in regards to the heat generated, the time spent above the maximum allowed is what matters. Then we would get into discussing "area under the curve"; in this case the curve would be power over time. Think of it in terms of, say, a PWM fan. The more time the fan is turned "ON" the faster it goes; if you leave it on all the time it goes full speed. So the analogy here is that the "ON" state can be equated to current being over the maximum rated value. RPM in this case could be roughly related to heat, which in turn directly relates to power. So if you are consistently "ON" then you are consistently over the maximum power spec'd, which means you are potentially generating more heat than the connector was designed for. Now if you lower the "ON" time it stands to reason that RPMs drop (as you know practically from a PWM fan's operation) and thus, by the analogy, the average power is lower. It stands to reason then that any heat generated would also be less. Of course it's a bit more complicated than this, as we would have to take into account the amount of time spent above and below, the amount by which it is above and below, and now we are back to the "area under the curve" discussion.

I have seen the result of resistance in contact connections. Maybe you have too. Ever see the end of an extension cord melt while providing power to what is seemingly a normal load? That happened either because there was more current flowing than the connector was designed for (unlikely, as these cords are designed to a well-established standard and a circuit breaker would likely trip first) or because there was higher contact resistance than should have been there. Now in this case the good people that spec'd PCI-E power determined a maximum contact resistance and thus determined what the maximum current would be to eliminate the potential for excessive heat. Of course I don't know if this is detailed in any of the literature, or if they just distilled it down to "this is the maximum current, don't go past this" in their specs table.

July 1, 2016 | 09:35 AM - Posted by Anonymous (not verified)

I looked at the Molex PCIe connector datasheets. They are rated for 1.1A per pin. There are 5 12V pins = 5.5A total. So there is no safety margin whatsoever.

The 6-pin and 8-pin power connectors have very substantial safety margins; the Molex pins can carry 9A each.
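Taking the two comments' figures at face value (1.1 A per slot pin, ~9 A per Mini-Fit pin; I have not verified these against the actual datasheets), the difference in margin is easy to quantify:

```python
# Back-of-envelope margin comparison using the pin ratings quoted above.
slot_pins, slot_pin_rating_a = 5, 1.1      # 12V pins in the PCIe x16 edge connector
sixpin_pairs, molex_pin_rating_a = 3, 9.0  # 12V pin pairs on a 6-pin PEG plug

slot_max_a = slot_pins * slot_pin_rating_a             # ~5.5 A total
slot_max_w = slot_max_a * 12.0                         # ~66 W on the 12V rail
sixpin_pin_capacity_w = sixpin_pairs * molex_pin_rating_a * 12.0  # ~324 W

# The 6-pin spec only asks for 75 W, so its pins carry a >4x safety margin,
# while the slot's 12V budget sits right at the quoted pin rating.
print(slot_max_a, slot_max_w, sixpin_pin_capacity_w)
```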

July 2, 2016 | 02:47 AM - Posted by arbiter

If you look back at the 295X2, how many watts/amps were being pulled through its two 8-pins? Assuming 75 watts from the PCI-E slot, that works out to 210 to as much as 265 watts per 8-pin, since the card was typically measured drawing 500+ watts and some reviewers saw as much as 600 watts.

July 2, 2016 | 01:11 PM - Posted by Anonymous (not verified)

The 8-pin power connector has three 12V pin pairs. Even if 600W is drawn from two 8-pin connectors, you are at less than 9A per pin pair (12V / ground).

600W / 6 pin pairs / 11.5V = 8.7A
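A quick check of that arithmetic, using the same numbers as the comment:

```python
# 600 W split across two 8-pin plugs (3 12V pin pairs each) at a sagged 11.5 V rail.
total_w = 600.0
pin_pairs = 6   # 2 connectors x 3 pairs
rail_v = 11.5   # worst-case voltage droop assumed in the comment

amps_per_pair = total_w / pin_pairs / rail_v
print(f"{amps_per_pair:.1f} A per pin pair")  # 8.7 A, under the ~9 A pin rating
```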

July 1, 2016 | 04:00 AM - Posted by zaq (not verified)

What a disaster, this is exactly what AMD didn't want. Hope this doesn't hinder their sales much.

July 1, 2016 | 04:53 AM - Posted by John Sandlin (not verified)

Do the voltage regulator modules (VRMs) know what current they are managing the voltage for?

If the VRMs can be software controlled and are segregated between PCI-E Bus and Power Supply connectors, hopefully AMD can tell the VRMs tied to the PCI-E card slot to keep the current draw at or below the PCI-E v3 spec levels, at least on average.

I do wonder, as many others have, how they got this far with the average power draw so far above limits.
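If the slot-fed VRM phases really can be steered in software, the kind of fix being speculated about here could look like a simple feedback loop: trim the power target whenever measured slot current exceeds the budget. This is a hypothetical sketch; the function names, thresholds, and step sizes are all invented, and AMD's actual driver fix may work quite differently.

```python
# Hypothetical driver-side limiter: back off the GPU power target whenever
# telemetry shows the slot 12V current above the PCI-E budget.
SLOT_LIMIT_A = 5.5   # assumed 12V slot current budget
STEP_PCT = 1         # how much to trim the power target per violation
FLOOR_PCT = 50       # never throttle below half the stock target

def adjust_power_target(slot_current_a: float, power_target_pct: int) -> int:
    """Return a new power target (%) given one slot-current sample."""
    if slot_current_a > SLOT_LIMIT_A:
        return max(FLOOR_PCT, power_target_pct - STEP_PCT)
    return power_target_pct

# Simulated telemetry: current falls as the target is trimmed.
target = 100
for reading_a in [7.0, 6.4, 5.9, 5.4, 5.3]:
    target = adjust_power_target(reading_a, target)
print(target)  # 97: trimmed once per over-limit sample, then held
```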

July 1, 2016 | 07:41 AM - Posted by Pale Scoot (not verified)

I'm really not an expert when it comes to electrical engineering, but that does sound like it would solve the problem. Is what you're saying essentially that a driver update could tell the card to preferentially draw more power from the 6-pin slot and less from the motherboard? My understanding is that the 75W "rating" for the PSU 6 pin cable is kind of a lowball number, and that most if not all cabling could actually handle much more power, but that it's more of an issue for the PCI slot because the power has to come from the 24-pin motherboard connector, and people don't really want to risk going too high.

Would a different solution be to just not overclock the card until a driver-side solution appears, or even to use WattMan to underclock it like 3-5%?

July 1, 2016 | 10:17 AM - Posted by Allyn Malventano

Yes, doing something like this would solve the problem. 

July 1, 2016 | 02:52 PM - Posted by Allyn Malventano

It all depends on the specific regulator design, but I suspect since they are meant to regulate the *output*, that their profiles are limited such that the input wouldn't exceed max current / power at worst case scenario conditions (-8% voltage, which would lead to higher current draw). This is likely why the other cards that don't have this issue not only run below 5.5A, they run below it by a fair margin.

July 1, 2016 | 06:34 AM - Posted by Ags1 (not verified)

I'm wondering about the GTX 950 here. A few models have come out with no need for a 6-pin, even though the launch TDP was 90W. I'm guessing this card sails pretty close to the PCIe limits too, especially if it gets overclocked.

July 1, 2016 | 07:03 AM - Posted by Anonymous (not verified)

No, it was through binning and lowered specs.

Note that in the case of the "no extra power" GTX 950 there is absolutely no confusion about where the power is coming from: it would be immediately obvious if those GTX 950s were drawing well above the 65W@12V and 10W@3.3V that PCI-E allows via the bus. I can think of many missteps leading to how the RX 480 launched as-is.

Why the RX 480 issue happened, we'll probably never know.

July 1, 2016 | 06:48 AM - Posted by Ags1 (not verified)

Also, PCs fail every day. Inevitably some PCs will fail in the weeks after installing the RX 480, and some of those users will post viral videos complaining that the AMD card damaged their computer.

July 1, 2016 | 07:10 AM - Posted by Anonymous (not verified)

Kids are getting out of school for summer break right now.

Buy an RX 480, put it into an older computer, and game hard through the coming heat waves with probably whatever OC they can get away with.

It is bound to happen.

July 1, 2016 | 07:03 AM - Posted by skline00

Ryan: After this story, Raja will want that last bottle of Bourbon! Good grief!

P.S. Kentucky is a Commonwealth.

July 1, 2016 | 07:34 AM - Posted by Pale Scoot (not verified)

I'm not so sure this power draw thing is a huge deal for people who aren't interested in overclocking. Sure, it does kind of mean that the WattMan tool that's coming with the new 480 drivers is pretty much unusable, and that's unfortunate. At least for me, though, getting a 4GB card that more or less competes at the resolution I want to game at (1080p) with other cards that are priced $50-150 higher is still a great deal. I don't care if I end up having to underclock it a little to avoid the power issue, the price to performance ratio will likely still be much better than most other cards on the market (if not all).

That said, I am kicking myself a little for not just waiting for an RX 470, which would probably have done just fine for my purposes while likely avoiding the power draw issue entirely, and for $50 less.

July 2, 2016 | 02:53 AM - Posted by arbiter

For a person that has zero interest in overclocking, it shouldn't matter. The problem is they made overclocking the GPU so easy, and with features like GPU Boost overclocking became pretty much all gain and no loss, since GPUs largely protect themselves from heat issues and too much voltage.

July 1, 2016 | 07:47 AM - Posted by Anonymous (not verified)

480 retail cards killing pcie slots:
https://community.amd.com/thread/202410

July 1, 2016 | 08:14 AM - Posted by Anonymous (not verified)

hardware.fr reported this problem too. Their review of the 480 is really good.
http://www.hardware.fr/articles/951-9/consommation-efficacite-energetiqu...

July 1, 2016 | 08:32 AM - Posted by Anonymous (not verified)

Since this will likely only affect older/cheaper motherboards, they will probably just get away with it. Who's to say your older system wasn't simply "unstable" under load, etc.?

July 1, 2016 | 11:22 AM - Posted by Anonymous (not verified)

To Ryan and Allyn,

I have some questions, hope you chaps don't mind:

1. Would a 4GB version of the RX 480 have less of a power draw from the PCI-E slot compared to the 8GB version?

2. If we already bought a reference 8GB card, would lowering the power limit in WattMan alleviate the excess power draw from the PCI-E slot?

3. Should we be waiting for RX 480 cards with custom PCB designs having an 8-pin or two 6-pin power connectors to be safe?

July 1, 2016 | 02:55 PM - Posted by Allyn Malventano

1. Possibly, but minimally so.

2. If it can be adjusted low enough, it should.

3. Either that or a fix from AMD, which we believe to be coming.

July 1, 2016 | 12:03 PM - Posted by spartaman64

There are some aftermarket 480 cards with an 8-pin; would they still have the same issue? And how big of an issue is this, would the card fry my H97 motherboard? Also, AMD is saying that the card passed their testing and PCI-SIG's testing, so how is it possible that neither they nor PCI-SIG caught this?

July 1, 2016 | 02:54 PM - Posted by Allyn Malventano

They would only have the same issue if they stuck with the exact same regulation config being used on the reference card, which is doubtful given the attention this issue has received by AMD.

July 2, 2016 | 02:58 AM - Posted by arbiter

Since this issue only just came to light, the question is whether they could revise and remake the boards that quickly to fix it, unless they had already changed the design from the get-go.

July 1, 2016 | 12:36 PM - Posted by Behrouz (not verified)

Is it possible to reduce it to 1200MHz or lower, with lower voltage? Did you test that?

I mean, what are the maximum clock and voltage if total power shall not be higher than 150W (75W from PCI-E, 75W from the PSU)?

July 2, 2016 | 07:39 AM - Posted by Keven Harvey (not verified)

I think the most obvious way to fix this would be to lower the power limit, and then potentially lower the voltage to gain back the performance lost by lowering the power limit.

July 1, 2016 | 12:39 PM - Posted by Anonymous (not verified)

Just bought one of these, not unpacked it yet, should I be returning it or what?! Is this fixable by a BIOS update from AMD, or is it hardware dependent?

July 1, 2016 | 12:47 PM - Posted by Behrouz (not verified)

I think yes, it's possible to reduce the clock to 1200MHz with lower voltage.

July 2, 2016 | 03:01 AM - Posted by arbiter

It might be fixable by BIOS; whether that's possible depends on how the card is set up, which is only known to the AIB makers and AMD. The only other option that would be a 100% fix would be to lower the card's clocks a little to get it under that 150 watt draw.

July 1, 2016 | 12:40 PM - Posted by Anonymous (not verified)

Reports of board failures, toasted sound cards, and black-screen reboots are rolling in.

Do not use this 480 in your AM2 motherboards

https://www.youtube.com/watch?v=rhjC_8ai7QA&feature=youtu.be

July 2, 2016 | 12:29 AM - Posted by -- (not verified)

yea ok...and why on gods earth are you putting an RX 480 into an AM2

BOTTLENECK MUCH?!??

LMAO.

I just installed this RX 480 8GB into my socket 939 and it totally toasted my system.

WTF GUYSSSSSSSSSSS i just wanted to play WoW on MAX!!!

July 3, 2016 | 06:40 AM - Posted by Peter2k (not verified)

It's a cheap card for cheaper systems

That's why it's an issue to begin with

Aside from that
As has been posted, it's not the board itself, the traces are fine
It's the contact pins, where the highest resistance is

So even high end boards would be affected

It's just that no one spending $350 and up would likely buy a 480
More likely a Vega, for instance

July 3, 2016 | 06:41 AM - Posted by Peter2k (not verified)

That is, $350 and up on a board

July 1, 2016 | 12:46 PM - Posted by Keif (not verified)

These analyses have been top notch. Putting that new testing equipment to good work!

I saw the youtuber Science Studio had issues with an old motherboard shutting down with the 480 you might be interested in, here: https://www.youtube.com/watch?v=rhjC_8ai7QA

July 1, 2016 | 12:51 PM - Posted by Anonymous (not verified)

AMDonkeys who bashed GTX 960 just got fked up.

July 2, 2016 | 03:04 AM - Posted by arbiter

It's possible Nvidia hit this issue in the R&D lab when testing new GPUs, hence why most of their cards that use a 6/8-pin pull only 30-some watts from the slot.

July 1, 2016 | 02:02 PM - Posted by Searching4Sasquatch (not verified)

https://media.makeameme.org/created/scorching-benchmarks-i.jpg

July 1, 2016 | 02:08 PM - Posted by Pablo Benítez (not verified)

Does this happen only when overclocked?

July 1, 2016 | 02:39 PM - Posted by Allyn Malventano

It's worse when overclocked, but does happen under stock clocks as well, just not as badly.

July 1, 2016 | 02:54 PM - Posted by JEREMY O (not verified)

I'm hoping they can fix this. I'm afraid to buy this card; I don't want to burn out the x16 slot on my mobo, a $120 H97I from ASUS (in 2014 prices). I don't think it is an el cheapo board, but it is not as expensive as a Z97 was in 2014. I hope it can be fixed by flashing the card or something. If all of these reference cards end up having a bad voltage controller on the board, then hell, I will wait for Sapphire's custom PCB with an eight-pin or buy a 1060 with an eight-pin. I will not risk damaging my board a year or two from now.

July 1, 2016 | 03:08 PM - Posted by AnonomousCzech (not verified)

Pretty much confirmed that the slot power draw is off on the RX 480, but paraphrased from a foreign-language forum I found this:

"What I find strange at PCPer: on the same day the 960 appears as a follow-up test, supposedly with only 2.5 amps on the 12V PEG (I measure about 4, as do the manufacturers with whom I have cross-checked my values), yet the RX 480 is back at 7.x amps in the 960 review. They contradict themselves, and strangely no further currents are mentioned, especially in the second, AMD-conforming measurement. Had those been identical to the first measurement, the second measurement setup would be wrong, and it would not change the facts."

Could you comment/explain on that please?

July 1, 2016 | 03:18 PM - Posted by Ryan Shrout

Not sure what they are referring to in contradiction...? I clearly show spikes at 7A at STOCK SETTINGS right here on the RX 480:

http://www.pcper.com/image/view/71153?return=node%2F65670

July 2, 2016 | 11:05 AM - Posted by AnonomousCzech (not verified)

@Ryan:
Someone from Tom's further elaborates (another article coming up soon apparently):
"What PCPer has not really pursued further in the second RX 480 article: in the standard it's NOT about the wattage but about the flowing currents, and even without the spikes my measurements are still far above the specifications."

July 1, 2016 | 04:22 PM - Posted by skline00

Put a 8 pin power connector on it for goodness sake!

And Ryan, thank you for page three where you test the GTX960 Strix and confirm that it DOES NOT exceed the slot limits like the RX480.

Raja: Ball is in your court.

July 2, 2016 | 03:09 AM - Posted by arbiter

They don't really need an 8-pin. Even though the 6-pin PCI-E connector is spec'd at 75 watts, the plug can handle more than that, so they could keep the 6-pin, still look low-power, and draw say 85-100 watts from it.

July 5, 2016 | 10:34 AM - Posted by AITOR BLEDA (not verified)

I agree with you.

While rated at 75W, I am pretty confident they can pull 150W from the 6-wire connector without serious issues.

And if there are problems with it, the damaged part would be the card or the cable, not the MB.
My cables can withstand more than 200W by AWG; the problem, as someone else has stated, is the quality of the connection. Non-soldered connections are always tricky, and they accumulate oxide and dust over time.

It is not the same for the contacts in the slot. They have smaller connection points (riskier), are essentially non-fixable, and have worse cooling.

You also have to take into account that from the PSU to the card there is one connector on the 6-pin path but two connectors on the slot path: the ATX power connector from PSU to motherboard, and the slot connector itself. Both connectors add resistance, heat up, and drop voltage.

If you really want to reduce power losses, you should draw as much power from the 6-pin as possible, so using an 8-pin connector or two 6-pins is a great idea: less voltage drop means more efficient power delivery.

My wild guess is that they wanted to say "look, only one 6-pin power connector", and this was led by marketing.

We have to consider that AMD is risking everything on this and working under high pressure. Someone made a stupid decision, and they launched as-is.
I am almost sure they knew they had this problem but hoped nobody would notice (like the 3.5GB vs 4GB memory issue with Nvidia).
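The connection-quality point above is worth quantifying: heat dissipated in a contact grows with the square of the current (P = I²R), so even modest over-current has an outsized thermal effect. The contact resistance below is an assumed round number, not a measured one.

```python
# I^2 * R heating in a single contact; 10 milliohms is an illustrative guess.
def contact_heat_w(current_a: float, contact_res_ohm: float) -> float:
    """Power dissipated in one contact resistance, in watts."""
    return current_a ** 2 * contact_res_ohm

R_CONTACT = 0.010  # ohms, assumed
in_spec = contact_heat_w(5.5, R_CONTACT)   # at the 5.5 A slot budget
observed = contact_heat_w(7.0, R_CONTACT)  # at the ~7 A spikes measured

print(f"{in_spec:.3f} W vs {observed:.3f} W "
      f"({observed / in_spec - 1:.0%} more heat for 27% more current)")
```

The squared relationship is why a 27% over-current turns into roughly 62% more heat at the contact.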


July 1, 2016 | 04:47 PM - Posted by schulmaster

Yes! That 960 Strix addendum is cathartic and hilarious. So often the comment sections are filled with vitriol and unsubstantiated counterarguments. It was wonderful to see article-level-scientific-method debunking of the green team bias argument(as it pertained to the 960 Strix in this case) that is prevalent every time AMD blunders.

July 1, 2016 | 05:26 PM - Posted by Anonymous Nvidia User (not verified)

Great job Ryan and Allyn. Even though I think Ryan may favor AMD, he didn't cover or play down a very real problem. I used to be an occasional reader but you guys earned my respect. You guys are definitely in my favorites now.

I may support Nvidia and have been called a fanboy, but if Nvidia did something like this I'd expect it to be outed as well. No one wants to buy a "bad" card.

Everyone please do Allyn and Ryan a favor and stop posting repeats of stuff that he already explained. Proves you didn't read article and prior posts.

July 1, 2016 | 07:10 PM - Posted by John Pombrio

Allyn, thank you for patiently answering the exact same question about average power and peak power spikes. I too was an electronics troubleshooter, for Hewlett-Packard's Test and Measurement Division, for 25 years, so I know exactly what you are concerned about. I have seen browned traces, melted vias and degraded pins on card slots, mostly due to power issues (or stray lightning discharges!).
Could this have been a one-off 480 card that somehow managed to draw much more power than normal due to, say, overheating of the VRMs?

July 1, 2016 | 07:39 PM - Posted by Allyn Malventano

Everyone with capability to measure slot power independent from the total has noted the overload condition. I believe it's four different sites so far. Toms' method (and the way they plot it) is overly emphasizing the peaks, but they do note the average.

July 1, 2016 | 07:27 PM - Posted by Anonymous (not verified)

This isn't looking good for AMD...

July 1, 2016 | 09:21 PM - Posted by John Douglas (not verified)

I got the 480 yesterday and today my PSU fried itself. I don't know what was damaged, if anything; waiting on a new power supply. But I don't think it was a coincidence. Granted, my power supply wasn't a known brand, so that could be it, but it ran fine for the last year with a 960.

July 2, 2016 | 03:13 AM - Posted by arbiter

Yeah, the PSU is probably the most important piece in a computer build; it can make or break a computer. I sent a buddy a GTX 780 I had laying around (he was using an old Radeon 5000 series card). He had a super cheap $20 Chinese "680" watt PSU, and you can guess why the wattage is in quotes. Needless to say, his machine in total with that GPU shouldn't have been more than 400 watts, maybe 350 at most, and the 780 killed the PSU.

July 2, 2016 | 08:25 AM - Posted by Keven Harvey (not verified)

I wouldn't think it's related to this issue but rather to the overall power the card draws. Cheap PSUs are rated on peak power while good ones are rated on continuous power; a cheap PSU can usually only sustain about half its rating.

July 1, 2016 | 09:30 PM - Posted by Anonymous (not verified)

Talk about foreshadowing

http://i.imgur.com/ONM9hKM.gif

July 1, 2016 | 11:23 PM - Posted by -- (not verified)

Right now all the OEMs are using AMD's reference design. Once they start creating their own PCBs and custom cooling, this problem can be addressed and fixed, if it can't be already with software.

July 2, 2016 | 03:15 AM - Posted by arbiter

That's the hope, if their design changes the way it works. If an AIB uses the reference card design and just slaps its own cooler on it, they could be looking at the same problem, and now that the issue is known they would have to pull those cards and redesign them, which would delay the launch.

July 2, 2016 | 09:56 PM - Posted by -- (not verified)

right now they have ZERO reason to use the ref design with another cooler.

Unless they love returns and CS working overtime.

July 1, 2016 | 11:55 PM - Posted by Anonymous (not verified)

Just feel sorry for people who have burnt out their motherboards.

If AMD had only used an 8-pin connector there wouldn't be any issue.

They left no margin for electrical safety with the 6-pin connector.

Of course, AMD has no liability as technically you should not overdraw if you don't overclock, but then you wouldn't experience the same level of performance.

July 2, 2016 | 12:26 AM - Posted by -- (not verified)

Yeah, but to be honest the only motherboard fries are people with el-cheapo motherboards and suspect power supplies.

No solid motherboard with a solid supply has failed.

The home-grown Foxconn mobos and grey-box PSUs are the ones feeling the pain.

Leave it to the pros, not the bedroom geeks.

July 2, 2016 | 12:50 AM - Posted by Anonymous (not verified)

So, as most of you know, I work in distro, and of the 100 480s, nearly a THIRD have already been returned. Most of these are DOA, but quite a few are also dying in the field, and we can only imagine how many more will fail within the next 12-24 months. And you wonder why we no longer suggest AMD CPUs or GPUs. Utterly unacceptable failure rates so far, and it's just the beginning. I called Raja directly expecting a "We're sorry, we'll cover ALL the expenses/losses you're going to have due to this."

But I was told only, "The pricing already has the higher failure rates taken into consideration." (So because something is priced at a certain point, it shouldn't work? Or what?) And was then told, "We really don't have the budget to cover even $10K-20K, let alone a $40K loss on a newly released product." No apology was given, and so my company very well might just drop AMD, again.

Rant over.

July 2, 2016 | 01:06 AM - Posted by Anonymous (not verified)

your company should have never picked up the 480 from the start if they cared about quality and reliability.

Quality and Reliability are EARNED.. i have waited for the RX to EARN its way... its failing...why would I stock that product?

Do i like looking bad? Do I like returns?

MORONS

you cry about "utterly unacceptable" rates of cards that are installed in shit systems that get fried. wow...

so now you make up this bullshit story that failure rate was built into the model...............is that weed i smell?

or a shill?

both?

ok yea

you are a fucking moron working at a fucking moron company that will eat shit while it keeps making fucking moron choices.

drop amd by all means fucking moron

they can also drop your ass....fucking moron.

July 2, 2016 | 10:43 AM - Posted by Anonymous (not verified)

So as most of you know, I work in Distro

say the guy named anonymous.
try to learn to troll, son!


July 2, 2016 | 01:02 AM - Posted by Steve_H

I'm posting this for Allyn. GO NAVY, bro! Retired AE here. Are you sure you weren't doing more fire control stuff? LOL. Ryan, stop letting your bulldog take all the heat for you! Hahaha. Love you guys and thanks for the content; keep trying (I want all the whiskey) to do the best you can for all. You're an awesome crew. 9.99 lol

July 2, 2016 | 01:54 AM - Posted by Allyn Malventano

Thanks, brother!

July 2, 2016 | 03:37 AM - Posted by Sev (not verified)

I have a Gigabyte ga-h110m-a Mobo, would this be ok to run it or is this Mobo considered old?

July 2, 2016 | 06:28 AM - Posted by skline00

Sorry, after what I have read so far I would NOT consider this gpu, as configured until the power management issue is resolved.

July 2, 2016 | 04:27 AM - Posted by Tomas Lundin (not verified)

Thanks for a truly great explanation.

When you as a layman read the PCI EXPRESS BASE SPECIFICATION, REV. 3.0 you get very confused when you read...
"
Slot Power Limit Value – In combination with the Slot Power
Limit Scale value, specifies the upper limit on power supplied by
the slot (see Section 6.9) or by other means to the adapter.
Power limit (in Watts) is calculated by multiplying the value in
this field by the value in the Slot Power Limit Scale field except
when the Slot Power Limit Scale field equals 00b (1.0x) and Slot
Power Limit Value exceeds EFh, the following alternative
encodings are used:
F0h = 250 W Slot Power Limit
F1h = 275 W Slot Power Limit
F2h = 300 W Slot Power Limit
F3h to FFh = Reserved for Slot Power Limit values above
300 W
This register must be implemented if the Slot Implemented bit is
Set.
Writes to this register also cause the Port to send the
Set_Slot_Power_Limit Message.
"

Does the limitation here cover only what you can set through programming, and not the limitation of the hardware itself? It states 'F2h = 300 W Slot Power Limit', and slot means slot, right?

Here is the paper http://composter.com.ua/documents/PCI_Express_Base_Specification_Revisio...
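The encoding quoted above is mechanical enough to put into code. Below is a sketch of the decode rule exactly as written in that excerpt; the 1.0x/0.1x/0.01x/0.001x multipliers for the Slot Power Limit Scale field come from the same register definition in the spec. Note that, per the question, this register only *advertises* a limit to the adapter (via the Set_Slot_Power_Limit message); it does not physically enforce anything.

```python
# Decode the Slot Capabilities "Slot Power Limit" fields (PCIe Base Spec 3.0).
def slot_power_limit_w(value: int, scale: int) -> float:
    """value: Slot Power Limit Value (8 bits); scale: Slot Power Limit Scale (2 bits)."""
    if scale == 0 and value > 0xEF:
        # Alternative encodings, used only when scale == 00b (1.0x).
        alt = {0xF0: 250.0, 0xF1: 275.0, 0xF2: 300.0}
        return alt.get(value, float("nan"))  # F3h-FFh reserved for >300 W
    return value * (1.0, 0.1, 0.01, 0.001)[scale]

print(slot_power_limit_w(0x4B, 0))  # 75.0 -> the standard x16 graphics slot limit
print(slot_power_limit_w(0xF2, 0))  # 300.0 -> alternative encoding from the quote
```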

July 2, 2016 | 04:43 AM - Posted by Tomas Lundin (not verified)

Sorry, it also says "or by other means to the adapter."
So in fact this information says noting regarding how much power you can pull from ONLY the PCI slot.

July 2, 2016 | 07:31 AM - Posted by Keven Harvey (not verified)

Do you think this has anything to do with the physical location of the power phases? They're right above the power portion of the PCIe connector, so the traces from the 6-pin are longer on the 480's PCB, causing more resistance, which could have been countered with thicker gauge wires.

July 2, 2016 | 07:44 AM - Posted by Keven Harvey (not verified)

Also, I think the most obvious "solution" to this would be to lower the power limit, and then undervolt to reduce the power needed at each P-state to maintain the same performance. But if AMD could have done that, they would have, so we can expect that it wouldn't work on all cards.


July 2, 2016 | 08:33 AM - Posted by Pholostan

I'm probably missing something here, so sorry for being an idiot.

Chris Angelini @ Tom's Hardware wrote:
The seventh phase supplies the memory modules with power using the PCIe slot’s 3.3V rail.

So ~5 Watts for the memory? Really? Memory controller maybe?

Edit: Maybe AUX Voltage? Some say the memory VRM is on the back of the card.

July 2, 2016 | 08:58 AM - Posted by Keven Harvey (not verified)

I saw a video from an LN2 overclocker saying that the memory VRM is the one right next to the 6-pin connector, so it's unlikely to use 3.3V.

July 2, 2016 | 09:10 AM - Posted by Pholostan

Yes, buildzoid has identified the VRMs. They are on the back of the card and next to the connector just as you said. Maybe we watch the same LN2 overclocker :)

Maybe the 3.3V rail supplies the VRM for the VRAM VDDQ (I/O) voltage. That is probably a lot less power than VDD.

July 2, 2016 | 09:14 AM - Posted by Moravid (not verified)

So is the power draw an issue ONLY when overclocking/increasing the power limit?

July 2, 2016 | 10:37 AM - Posted by Anonymous (not verified)

All the guys who don't support AMD are real masochists. We are in a two-manufacturer world for both GPUs and CPUs; AMD's role is vital for all PC buyers. Why can't some people understand this?
In this transition period it could be very worthwhile to support AMD (not necessarily by buying, but by writing kinder words); supporting AMD is a way to bring Nvidia's prices down. How can someone not understand this?

July 3, 2016 | 07:18 AM - Posted by Peter2k (not verified)

No one is going to buy hardware they don't want or need just because

I've built a high-end system and I want a new card that goes well with it

So what's my option?
Wait for half a year for Vega?
If history is anything to go by then all it will do is match a gtx 1080

But a month before Vega comes out Nvidia launches the ti version

Tsk
AMD needed a card that sells like crazy
This is not a good start
And it's entirely AMD's fault

July 2, 2016 | 12:03 PM - Posted by ThatSumsItUp (not verified)

Well, if the problem wasn't important then AMD would not be issuing a fix, simple as that.

July 2, 2016 | 02:09 PM - Posted by John Pombrio

Having an 8-pin connector may not be a fix for the RX 480. What troubles me is the nearly identical power draw from the 75 watt 6-pin connector and from the PCI-E slot in the slides shown here.
Since the 6-pin connector comes directly from the PC power supply's 12 volt rails, there is no real barrier to it providing much more than the rated 75 watts. There is a sense pin, but that exists so the graphics card is not fooled by a 6-pin power connector plugged into an 8-pin socket on the card. Indeed, Tom's bar chart shows the 12V line supplying 85 watts on average with a peak of 116 watts. So why doesn't the very capable power cable supply the extra power needed to prevent the high draw on the PCI-E bus?
The only answer I can come up with is that there is no "discrimination" device to unbalance the loads between the two sources. Could it be as simple as the card having three VRM phases hooked to the 12 volt power cable and three hooked to the PCI-E bus?
With the power droop bringing the PCI-E bus voltage down to 11.5 volts, it may be a simple test to trace where this power goes by tracking what on the card sits at 11.5 volts. It may not be that simple (capacitors, drop-down resistors, etc.), but it is worth a try.
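That "no discrimination device" hypothesis has a simple electrical reading: if both 12V feeds end up effectively paralleled through their respective path resistances, the current split is just a resistive divider, and two similar paths split the load roughly 50/50, which would match the near-identical draws in the slides. The resistance values below are invented purely to show the effect.

```python
# Current divider between two paralleled 12V paths (illustrative resistances).
def split_current(total_a: float, r_slot: float, r_sixpin: float):
    """Split total current between two parallel paths in proportion to conductance."""
    g_slot, g_six = 1.0 / r_slot, 1.0 / r_sixpin
    g_sum = g_slot + g_six
    return total_a * g_slot / g_sum, total_a * g_six / g_sum

slot_a, six_a = split_current(13.0, 0.030, 0.030)  # equal path R -> 50/50 split
print(f"slot {slot_a:.1f} A, 6-pin {six_a:.1f} A")

# A beefier 6-pin path (lower R) would shift current off the slot:
slot_a2, six_a2 = split_current(13.0, 0.040, 0.020)
print(f"slot {slot_a2:.1f} A, 6-pin {six_a2:.1f} A")
```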

July 2, 2016 | 03:16 PM - Posted by wissam (not verified)

You just opened a can of worms when you said you would be testing it on X motherboard; now people are asking you to test on all kinds of motherboards, old/new, server, OEM, expensive, cheap, AMD/Intel. That kind of R&D should have been done by the manufacturer. Anyway, IF you are going to test old motherboards, just keep in mind that degradation levels may vary depending on past OCs, how many cards a board held, how many SATA ports were used... you know, the usual. Again, a can of worms :S

July 2, 2016 | 03:17 PM - Posted by Nev (not verified)

Comprehension skills, Ryan? AMD promised an updated statement on Tuesday on its progress, NOT an updated driver. Next you'll run the "No promised driver fix" clickbait headline.

July 2, 2016 | 03:40 PM - Posted by John Pombrio

Funny, I read the exact same words as you did, and I too thought it said there would be a fix on Tuesday. Extremely nice bit of verbiage by AMD, as they left themselves a big out if they cannot come up with a quick solution.

July 2, 2016 | 05:27 PM - Posted by Anonymous (not verified)

Nice piece of work guys!!!!

July 2, 2016 | 05:39 PM - Posted by Zsoltyika (not verified)

Hey guys, I had the pleasure/displeasure (you decide) of watching Buildzoid's Twitch stream, which just finished. At stock OC settings the card ran at pretty much 83°C+; watercooled it sat flat at 31°C, and all the testing got it up to a stable 1410 MHz at a 50% power target. Then came the physical modding, where he found out what exactly is going on with the PCB:
https://www.twitch.tv/buildzoid/v/75850933 from around the 55 min mark. Power draws were not measured due to lack of means.
TL;DW: the grounding on the 6-pin is not up to what it should be, and half of the GPU is fed from the PCIe 12 V as well, without any limiter. This is a design flaw in the PCB itself, i.e. a hardware issue. It is in clear violation of industry standards and a potential safety hazard. God knows how many cards are out in the wild, going into systems that could face problems not in two days' time but in 3-6 months.

July 2, 2016 | 06:39 PM - Posted by ytazbddj

@Ryan Shrout & Allyn Malventano: everyone is talking about the power draw of the RX 480, and interestingly there was an article on wccftech.com where they underclocked the RX 480 and reported lower power usage with no performance degradation. So I was hoping you guys could do an underclocking article and show how it affects these power consumption concerns.
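For rough intuition on why underclocking cuts draw so effectively, dynamic GPU power scales approximately linearly with clock and quadratically with core voltage (P ∝ C·V²·f). A back-of-the-envelope sketch — the clock and voltage figures below are illustrative assumptions, not measured RX 480 values:

```python
# Rough dynamic-power scaling estimate: P ~ C * V^2 * f.
# The baseline and underclocked figures below are illustrative guesses,
# not measured RX 480 numbers.

def scaled_power(p_base, f_base, f_new, v_base, v_new):
    """Estimate power after a clock/voltage change, assuming P ~ V^2 * f."""
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

# Hypothetical example: drop from 1266 MHz @ 1.15 V to 1150 MHz @ 1.05 V,
# starting from ~165 W board draw under a demanding game (per this thread).
est = scaled_power(165.0, 1266, 1150, 1.15, 1.05)
print(f"Estimated draw after underclock: {est:.0f} W")  # ~125 W
```

Even a modest frequency drop pays off disproportionately if it allows a voltage reduction, since voltage enters the estimate squared.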

July 2, 2016 | 09:57 PM - Posted by Anonymous (not verified)

Attention, RYAN...
The issue has been discovered:
3 of the 6-pin grounds are tied together, with no center sense pin isolated; the top 3 VRMs run to the 6-pin,
the bottom 3 VRMs run to the PCI-E fingers.
They are on separate circuits and not connected.

Go to https://www.twitch.tv/buildzoid/v/75850933 54 minutes in to see how it's out of spec

July 2, 2016 | 10:01 PM - Posted by mAxius

From what I read here and saw with a multimeter, and from this guy https://www.twitch.tv/buildzoid

AMD was drunk when designing the power side of the card. I don't know how a firmware update will fix this; it's jacked to heck and back.

July 3, 2016 | 06:02 PM - Posted by John Pombrio

Anyone with a voltmeter can now find out for themselves if the RX 480 cards have half of their VRMs tied directly to the PCI-E 12 volt supply pins. Buildzoid shows us how at the 54 minute mark:
https://www.twitch.tv/buildzoid/v/75850933

July 2, 2016 | 11:03 PM - Posted by Anonymous (not verified)

Oh, what a beat-up. Worse than the 970's 3.5GB-gate, which at least had a real issue at its heart. We have had plenty of cards and other hardware that exceed specifications over the years, and TDPs themselves are a rubbery figure. If power draw through slots or 6-pins were hard-limited, we'd see almost every card unable to be overclocked.

This is like the cops telling you that everybody died because you went 2 kph over the speed limit. Total hyperbole.

July 3, 2016 | 06:57 AM - Posted by Peter2k (not verified)

It's funny how some don't get it.

No one cares that this card, or any other, draws more power than advertised.

It draws too much power where it should not: the weakest link in the chain,
the PCI-E socket.
That's the only issue.

And at least the GTX 970 only gave you worse fps;
it didn't actually damage hardware.

And there's still a lawsuit going on about that advertised 4GB.

July 2, 2016 | 11:40 PM - Posted by -- (not verified)

driver fix incoming......everyone please wipe your asses and put your pants back on.

If you already got an Nvidia tattoo on your face... I'm sorry. Hopefully it matches the Intel logo on your arm.

July 3, 2016 | 07:06 AM - Posted by Peter2k (not verified)

Problem is word of mouth

I saw many posts on gaming forums already saying don't buy the RX 480, it'll fry your board.

It's that kind of thing a fix will not make up for

July 3, 2016 | 12:03 AM - Posted by MoonSpot

Thanks for the digestible testing, guys. I'm still confused about why the power draw from the slot and the 6-pin seemed to mirror each other; I am curious whether that was just this instance, a design flaw, or an error in the BIOS.
Looking forward to coverage, testing, and confirmation of the patch AMD is hoping to roll out.

Thanks for keeping us posted :D

July 3, 2016 | 12:59 PM - Posted by John Pombrio

The reason the two current lines are exactly the same is that AMD made the RX 480 unlike any card in history. To make the card look like it was drawing less power, instead of pulling most of the power from the 6-pin connector, AMD split the board's six power VRMs in two. Half of the VRMs are powered by the soldered-together 12 V lines on the 6-pin connector (making it effectively an 8-pin connector), with the grounds also soldered together (including the sense line, another major problem). The other three VRMs pull power only from the PCI-E slot. The card's power load comes from both, so the current and wattage draw are the same for the 6-pin connector and the PCI-E slot.
Unfortunately, this is a very bad mistake on AMD's part, as the PCI-E slot is only rated for 66 watts on the 12 volt pins. With a TDP of 150 watts, that means the RX 480 is already out of spec for the PCI-E slot AT THE CARD'S RATED TDP. Since this card is also power starved, any demanding game will pull it to 165 watts or so at stock clocks. When overclocked, the card's draw can be as high as 195 watts. So the slot is roughly 14% over the PCI-E spec at the rated TDP, 25% over at stock in a demanding game, and a whopping 48% over when overclocked.
AMD has misrepresented the card to the PCI approval committee, which could lead to fines or the card not being approved for sale. The RX 480 could also eventually hurt your motherboard, as the slot pins and traces are rated for much less current than the card is pulling.
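Those overage figures can be checked with quick arithmetic, assuming the split really does draw power evenly from the 6-pin and the slot (half the board power on each source) against the slot's 66 W rating on the 12 V pins:

```python
# Sanity check of the spec-overage figures, assuming an even 50/50 split
# between the 6-pin connector and the PCIe slot's 12 V pins (rated 66 W).

SLOT_12V_RATING_W = 66.0

def slot_overage_pct(total_board_w, slot_share=0.5):
    """Percent by which the slot's 12 V draw exceeds its 66 W rating."""
    slot_w = total_board_w * slot_share
    return (slot_w / SLOT_12V_RATING_W - 1.0) * 100.0

for label, watts in [("rated TDP", 150), ("demanding game", 165), ("overclocked", 195)]:
    print(f"{label} ({watts} W total): {slot_overage_pct(watts):.0f}% over the slot rating")
```

Under that even-split assumption, the slot's 12 V pins run about 14% over at the 150 W TDP, 25% over at 165 W, and nearly 48% over at 195 W.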

July 3, 2016 | 04:26 PM - Posted by JEREMY O (not verified)

Well, that does it; I'll wait for the Sapphire Nitro custom board. I've got a feeling the only way to fix this in software is to throttle the card's clock speed and/or voltage in the high power state, reducing performance and thereby consumption on reference cards.

This is my educated guess at what AMD will do. I do not claim to be an engineer; I'm a math guy.

Thanks for the explanation, Mr. John Pombrio.

Sapphire is making a custom PCB with an eight-pin... I think they will route the traces right.

July 3, 2016 | 09:42 PM - Posted by Anonymous (not verified)

AMD sent card samples to an independent testing lab for PCI certification testing, and that lab has to report its results to PCI-SIG. AMD therefore has no influence over the certification process and sees no results other than what PCI-SIG makes known in its decision process.

Any competent testing lab is not going to risk its certification credentials on a product that may have been improperly engineered, so the lab probably flagged any problems in its report to PCI-SIG; it will be on PCI-SIG to answer for any final approval of the lab's results. And you can be damn sure the independent testing lab has the proper testing mule for any PCI-related testing of GPU cards.

July 3, 2016 | 10:13 PM - Posted by Anonymous (not verified)

VRMs on GPUs Explained:

"(Tutorial) Graphics Cards Voltage Regulator Modules (VRM) Explained"

http://www.geeks3d.com/20100504/tutorial-graphics-cards-voltage-regulato...

Also an interesting project:

"Complete Disassembly of RX 480 – The Road to DIY RX 480 Hybrid"

http://www.gamersnexus.net/guides/2498-complete-disassembly-of-rx-480-an...

July 4, 2016 | 04:21 PM - Posted by MoonSpot

I'm not generally one for the looks of components, but that GamersNexus hybrid was fairly horrific. There's a lot of room to improve with a more specialized kit. Hopefully EK's upcoming kit isn't a run on the bank, and other "non-vise-clamp/hovering-fan-on-a-stick" options present themselves.

July 3, 2016 | 07:42 AM - Posted by Anonymous (not verified)

Paid fanboys pretended that the RX 480 issue didn't exist and that it was all a media invention.

Later they pretended that the Nvidia GTX 480 violated the spec as well and that the media didn't report it because it is biased.

When proven wrong, they pretended that the Nvidia 750 Ti violated the spec as well and that the media didn't report it because it is biased.

When proven wrong, they pretended that the Asus GTX 960 Strix violated the spec as well and that the media didn't report it because it is biased.

Now that they are also proven wrong, they are starting to pretend that the GTX 950 violated the spec...

When will this stop?

July 3, 2016 | 11:27 AM - Posted by ITGamerGuy

Here is my suggestion to RTG over at their forum. Think I may have gotten to Raja, you should expect a call to hammer out the details going forward ;)

https://community.amd.com/thread/202526

July 3, 2016 | 01:30 PM - Posted by Anonymous Nvidia User (not verified)

This fanboy's solution hasn't even rated an answer or reply yet. Clickbait.
