
Power Consumption Concerns on the Radeon RX 480

Manufacturer: AMD

Too much power to the people?

UPDATE (7/1/16): I have added a third page to this story that looks at the power consumption and power draw of the ASUS GeForce GTX 960 Strix card. This card was pointed out by many readers on our site and on reddit as having the same problem as the Radeon RX 480. As it turns out...not so much. Check it out!

UPDATE 2 (7/2/16): We have an official statement from AMD this morning.

As you know, we continuously tune our GPUs in order to maximize their performance within their given power envelopes and the speed of the memory interface, which in this case is an unprecedented 8Gbps for GDDR5. Recently, we identified select scenarios where the tuning of some RX 480 boards was not optimal. Fortunately, we can adjust the GPU's tuning via software in order to resolve this issue. We are already testing a driver that implements a fix, and we will provide an update to the community on our progress on Tuesday (July 5, 2016).

Honestly, that doesn't tell us much. And AMD appears to be deflecting slightly by using words like "some RX 480 boards". I don't believe this is limited to a subset of cards, or review samples only. AMD does indicate that the 8 Gbps memory on the 8GB variant might be partially to blame - which is an interesting correlation to test out later. The company does promise a fix for the problem via a driver update on Tuesday - we'll be sure to give that a test and see what changes are measured in both performance and in power consumption.

The launch of the AMD Radeon RX 480 has generally been considered a success. Our review of the new reference card shows impressive gains in architectural efficiency, improved positioning against NVIDIA’s competing parts in the same price range, and VR-ready gaming performance starting at $199 for the 4GB model. AMD has every right to be proud of the new product and should hold this position alone until the GeForce product line brings a Pascal card down into the same price category.

If you read carefully through my review, some interesting data cropped up around power consumption and delivery on the new RX 480. Looking at our power consumption numbers, measured directly from the card rather than at the wall, the card was drawing slightly more than its advertised 150 watt TDP. This was done at 1920x1080 and tested in both Rise of the Tomb Raider and The Witcher 3.

When overclocked, the results were even higher, approaching the 200 watt mark in Rise of the Tomb Raider!

A portion of the review over at Tom’s Hardware produced similar results but detailed the power consumption from the motherboard PCI Express connection versus the power provided by the 6-pin PCIe power cable. There has been a considerable amount of discussion in the community about the amount of power the RX 480 draws through the motherboard, whether it is out of spec and what kind of impact it might have on the stability or life of the PC the RX 480 is installed in.

As it turns out, we have the ability to measure the exact same kind of data, albeit through a different method than Tom’s, and wanted to see if the result we saw broke down in the same way.

Our Testing Methods

This is a complex topic so it makes sense to detail the methodology of our advanced power testing capability up front.

How do we do it? It is simple in theory but surprisingly difficult in practice: we intercept the power being sent through the PCI Express bus as well as the ATX power connectors before they reach the graphics card, and directly measure power draw with a 10 kHz DAQ (data acquisition) device. A huge thanks goes to Allyn for getting the setup up and running. We built a PCI Express bridge that is tapped to measure both 12V and 3.3V power, and built some Corsair power cables that measure the 12V coming through those as well.
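The arithmetic behind the capture is straightforward: each tap yields a voltage and a current, per-rail power is their product, and board power is the sum across taps. A minimal sketch of that bookkeeping (function names and sample values are invented for illustration, not our actual LabVIEW pipeline):

```python
# Hypothetical sketch of the per-rail arithmetic behind this kind of capture.
# Each tap yields a voltage and a current; per-rail power is their product,
# and total board power is the sum across taps. Sample values are invented.

def rail_power(volts, amps):
    """Instantaneous power (W) for one rail from paired V/I samples."""
    return [v * i for v, i in zip(volts, amps)]

def total_power(*rails):
    """Sum per-rail power sample-by-sample for combined board draw."""
    return [sum(samples) for samples in zip(*rails)]

# Three illustrative samples per rail
slot_12v  = rail_power([12.0, 11.9, 12.1], [4.0, 4.2, 4.1])   # slot +12V
slot_3v3  = rail_power([3.3, 3.3, 3.3],   [1.2, 1.3, 1.2])    # slot +3.3V
pcie_6pin = rail_power([12.0, 12.0, 11.9], [5.5, 5.6, 5.7])   # 6-pin +12V

board_total = total_power(slot_12v, slot_3v3, pcie_6pin)
```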

The result is data that looks like this.


What you are looking at here is the power measured from the GTX 1080. From time 0 to about 8 seconds the system is idle; from 8 seconds to about 18 seconds Steam is starting up the title; from 18-26 seconds the game sits at the menus; we load the game from 26-39 seconds and then play through our benchmark run after that.

There are four lines drawn in the graph, the 12V and 3.3V results are from the PCI Express bus interface, while the one labeled PCIE is from the PCIE power connection from the power supply to the card. We have the ability to measure two power inputs there but because the GTX 1080 only uses a single 8-pin connector, there is only one shown here. Finally, the blue line is labeled total and is simply that: a total of the other measurements to get combined power draw and usage by the graphics card in question.

From this we can see a couple of interesting data points. First, the idle power of the GTX 1080 Founders Edition is only about 7.5 watts. Second, under a gaming load of Rise of the Tomb Raider, the card is pulling about 165-170 watts on average, though there are plenty of intermittent spikes. Keep in mind we are sampling the power at 1000/s so this kind of behavior is more or less expected.

Different games and applications impose different loads on the GPU and can cause it to draw drastically different power. Even if a game runs slowly, it may not be drawing maximum power from the card if a certain system on the GPU (memory, shaders, ROPs) is bottlenecking other systems.

One interesting note on our data compared to what Tom’s Hardware presents – we are using a second order low pass filter to smooth out the data to make it more readable and more indicative of how power draw is handled by the components on the PCB. Tom’s story reported “maximum” power draw at 300 watts for the RX 480 and while that is technically accurate, those figures represent instantaneous power draw. That is interesting data in some circumstances, and may actually indicate other potential issues with excessively noisy power circuitry, but to us, it makes more sense to sample data at a high rate (10 kHz) but to filter it and present it in a more readable way that better meshes with the continuous power delivery capabilities of the system.
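For readers who want to see what that smoothing does, a second-order low-pass filter is easy to apply in software. The sketch below cascades two first-order exponential stages (one common way to get a second-order response); the 50 Hz cutoff and the synthetic spike train are assumptions for illustration, not the exact filter or data used in our capture:

```python
# Illustrative second-order low-pass over 10 kHz power samples. The filter
# design (two cascaded first-order exponential stages) and the 50 Hz cutoff
# are assumptions for this sketch; the article only states "second order
# low pass filter".
import math

fs = 10_000        # sample rate, Hz (matches the 10 kHz DAQ)
cutoff = 50.0      # assumed cutoff frequency, Hz
alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / fs)

def lowpass2(samples, alpha):
    """Second-order low-pass: two cascaded first-order exponential stages."""
    y1 = y2 = samples[0]
    out = []
    for x in samples:
        y1 += alpha * (x - y1)   # first stage
        y2 += alpha * (y1 - y2)  # second stage
        out.append(y2)
    return out

# Synthetic trace: a steady ~150 W draw with brief switching spikes to 300 W
power = [150.0] * fs
for i in range(100, fs, 100):
    power[i] = 300.0

smoothed = lowpass2(power, alpha)
# The raw peak is 300 W; the filtered trace stays near the 150 W continuous level.
```

This is why an "instantaneous maximum" and a filtered continuous figure can legitimately differ by a factor of two on the same data.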


Image source: E2E Texas Instruments

An example of instantaneous voltage spikes on power supply phase changes

Some gamers have expressed concern over that “maximum” power draw of 300 watts on the RX 480 that Tom’s Hardware reported. While that power measurement is technically accurate, it doesn’t represent the continuous power draw of the hardware. Instead, that measurement is the result of a high frequency data acquisition system that may take a reading at the exact moment that a power phase on the card switches. Any DC switching power supply that is riding close to a certain power level is going to exceed that on the leading edges of phase switches for some minute amount of time. This is another reason why our low pass filter on power data can help represent real-world power consumption accurately. That doesn’t mean the spikes they measure are not a potential cause for concern, that’s just not what we are focused on with our testing.

Continue reading our analysis of the power consumption concerns surrounding the Radeon RX 480!

Setting up the Specification

Understanding complex specifications like PCI Express can be difficult, even for those of us working on hardware evaluation every day. Doing some digging, we were able to find a table that breaks things down for us.


We are dealing with high power PCI Express devices so we are only directly concerned with the far right column of data. For a rated 75 watt PCI Express slot, power consumption and current draw are broken down into two categories: +12V and +3.3V. The +3.3V line has a voltage tolerance of +/- 9% (3.003V – 3.597V) and a 3A maximum current draw. Taking the voltage at the nominal 3.3V level, that results in a maximum power draw of 9.9 watts.

The +12V rail has a tolerance of +/- 8% (11.04V – 12.96V) and a maximum current draw of 5.5A, resulting in a peak +12V power draw of 66 watts. The total for both +12V and +3.3V rails is 75.9 watts, but as footnote 4 at the bottom of the table notes, the total should never exceed 75 watts, with neither rail exceeding its current draw maximum.
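Those limits can be worked out directly from the table's current and tolerance figures. A small sketch (the structure and names are ours; the numbers are the specification's):

```python
# Working the slot limits from the spec table. Structure and names are ours;
# the numbers are the specification's current and tolerance figures.
RAIL_LIMITS = {
    "+3.3V": {"nominal_v": 3.3,  "tolerance": 0.09, "max_a": 3.0},
    "+12V":  {"nominal_v": 12.0, "tolerance": 0.08, "max_a": 5.5},
}
SLOT_TOTAL_W = 75.0  # combined cap from footnote 4, despite 75.9 W of per-rail headroom

def rail_max_w(rail):
    """Maximum rail power at the nominal voltage."""
    spec = RAIL_LIMITS[rail]
    return spec["nominal_v"] * spec["max_a"]

def voltage_range(rail):
    """Allowed (min, max) rail voltage from the tolerance band."""
    spec = RAIL_LIMITS[rail]
    v, tol = spec["nominal_v"], spec["tolerance"]
    return (v * (1 - tol), v * (1 + tol))

# +3.3V: 3.3 V x 3.0 A = 9.9 W, allowed 3.003-3.597 V
# +12V:  12 V  x 5.5 A = 66 W,  allowed 11.04-12.96 V
```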

Diving into the current

Let’s take a look at the data generated through our power testing and step through the information, piece by piece, so we can all understand what is going on. The graphs built by LabVIEW SignalExpress have a habit of switching around the colors of data points, so pay attention to the keys for each image.

Rise of the Tomb Raider (1080p) power draw, RX 480

This graph shows Rise of the Tomb Raider running at 1080p. The yellow line up top is the total combined power consumption (in watts) calculated by adding up the power (12V and 3.3V) from the motherboard PCIe slot and the 6-pin PCIe power cable (12V). The line is hovering right at 150 watts, though we definitely see some spiking above that to 160 watts with an odd hit above 165 watts.

There is a nearly even split between the power draw of the 6-pin power connector and the motherboard PCIe connection. The blue line shows slightly higher power draw of the PCIe power cable (which is forgivable, as PSU 6-pin and 8-pin supplies are generally over-built) while the white line is the wattage drawn from the motherboard directly.

Below that is the red line for 3.3V power (only around 4-5 watts generally) and the green line (not used, only when the GPU has two 6/8-pin power connections).

Rise of the Tomb Raider (1080p) power draw, RX 480

In this shot, we are using the same data but zooming in on a section towards the beginning. It is easier to see our power consumption results, with the highest spike on total power nearly reaching the 170-watt mark. Keep in mind this is NOT with any kind of overclocking applied – everything is running at stock here. The blue line hits 85 watts and the white line (motherboard power) hits nearly 80 watts. PCI Express specifications state that the +12V power delivered through a motherboard connection shouldn’t exceed 66 watts (actually the limit is based on current, more on that later). Clearly, the RX 480 is beyond the edge of these limits, but not to a degree where we would be concerned.
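Put as code, that comparison looks like this. This is our own illustrative sketch; using the nominal 12 V rail voltage is a simplification, since the real rail can sit anywhere in its 11.04-12.96 V tolerance band and the spec limit is stated in amps:

```python
# Sketch of the compliance check implied above. The PCI Express limit is
# specified as current (5.5 A on the slot's +12V pins), so a measured
# wattage has to be converted back to amps before comparing. Assuming the
# nominal 12 V rail voltage is a simplification for illustration.
SLOT_12V_MAX_A = 5.5

def slot_current_a(watts, volts=12.0):
    """Approximate slot +12V current from a measured wattage."""
    return watts / volts

def over_spec(watts, volts=12.0):
    return slot_current_a(watts, volts) > SLOT_12V_MAX_A

# The ~80 W motherboard draw observed above works out to ~6.7 A, past the
# 5.5 A ceiling even before accounting for voltage droop under load.
```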

The Witcher 3 (1080p) power draw, RX 480

The second game I tested before the controversy blew up was The Witcher 3, and in my testing this was a bigger draw on power than Rise of the Tomb Raider. When playing the game at 1080p it was averaging 155+ watts towards the end of the benchmark run and spiking to nearly 165 watts in a couple of instances.

The Witcher 3 (1080p) power draw, RX 480

Zooming in a bit on the data we get more detail on the individual power draw from the motherboard and the PCIe 6-pin cable. The white line of the MB +12V power is going over 75 watts, but not dramatically so, while the +3.3V power is hovering just under 5 watts, for a total of ~80 watts. Power over the 6-pin connector goes above 80 watts here as well.


June 30, 2016 | 03:00 PM - Posted by Searching4Sasquatch (not verified)

Holy. Shit.

Can't wait to see the carnage on YouTube or Facebook soon.

June 30, 2016 | 04:42 PM - Posted by ben capizzo (not verified)

you do know the gtx 960 did this even worse with spikes past 225 watts?

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-960,4038-8.html

gee, why was there no "carnage" or reporting back then? Nvidia bias in gaming media?

June 30, 2016 | 05:03 PM - Posted by Anonymous (not verified)

The GTX 960, in general, did not do this, only the ASUS Strix version, and even then it had power spikes that were short (i.e. less than 1ms) as opposed to this card which has the more serious problem of continuously over-drawing. When the GTX Strix did it, it also pulled the over-draw off the external PCI-E connector. So the two issues aren't really the same. This is much worse.

June 30, 2016 | 05:38 PM - Posted by Bob Jones (not verified)

LOL, typical nonsense and wildly jumping to whatever conclusions favor Nvidia without any evidence of them.

Tom's only tested two 960's for PCIE slot power draw. These were the only two 960's tested in this manner anywhere on the web. One of the two failed, a 50% rate.

Hope that helps.

Also these were AIB 960's, whereas we're dealing with reference 480's. Most people are saying AIB 480's will not have this issue (due to at least having 8 pin connectors). So if AIB 960's did? How much worse might the reference 960's have been? The point is we're comparing apples and oranges here, and the discrepancy should favor the 960.

June 30, 2016 | 05:43 PM - Posted by Bob Jones (not verified)

And no, wow, you're just wrong on wrong.

There's no avg draw from the PCIE slot presented for the Strix 960. However, judging by the graphs, the average likely exceeded 75 watts.

There's no proof constant average is more damaging than spikes. The reverse is likely true IMO, even Ryan Shrout admitted the 960's spikes over 225 were concerning.

And then you literally just made this up

"When the GTX Strix did it, it also pulled the over-draw off the external PCI-E connector. "

First of all there's no external PCI E connector.

Second, I assume you meant the 6 pin plug. Still abjectly false, the chart in question is the strix 960's draw FROM THE PCIE SLOT ONLY. There's no mention at all of the external plug.

June 30, 2016 | 06:27 PM - Posted by Allyn Malventano

Nope. (average line)

July 2, 2016 | 05:24 PM - Posted by Stefem (not verified)

You lack even basic knowledge of electricity; while a conductor can sustain spikes and even short bursts of current, it's not equally tolerant of continuous over-current.

That said, I don't see the RX 480 at stock clocks being a big threat to motherboards, as rules for conductor sizing include a safety margin, but still, specs should never be violated anyway.
I'm more concerned about overclocking, where it goes much further out of spec.

As for the rest, Allyn proved you are even ignoring test results

July 4, 2016 | 04:01 PM - Posted by Anonymous (not verified)

There actually is a trend of motherboards substantially undersizing the PCIe power delivery traces as a cost-saving measure. This was (maybe still is?) common across much of Gigabyte's range, and on lower-cost models from Asrock. I think Asus has been pretty consistent about solid power delivery quality.

A big warning sign is an extra power connector on the board for the PCIe slots. On a >$300 board this really might be for extra power for high-end configurations, on a <$150 board it's because it's needed for stability.

July 4, 2016 | 04:13 PM - Posted by Anonymous Nvidia User (not verified)

Here's another graph from the same review. It shows the distribution of power over 1 second.

http://media.bestofmicro.com/ext/aHR0cDovL21lZGlhLmJlc3RvZm1pY3JvLmNvbS9...

There are maybe 3 peaks on PEG that are above 75 watts over the entire duration. The majority of the time it is well below maximum.

July 4, 2016 | 06:34 PM - Posted by Jeremy Hellstrom

Bloody hell that is an awful graph!  Data seems fine but the ex-desktop publisher in me is weeping tears of blood and vomit.

June 30, 2016 | 06:04 PM - Posted by Anonymous (not verified)

it says 55W average, meaning that is in the safe zone.

July 1, 2016 | 04:41 PM - Posted by Anonymous (not verified)

http://www.pcper.com/reviews/Graphics-Cards/Power-Consumption-Concerns-R...

Your lies got debunked.

Stop trying to defend a crap company like AMD.

July 1, 2016 | 07:27 PM - Posted by Anonymous (not verified)

AMD crap company? Lol you guys are shooting you're self in the foot. 2-3 years you will be buying the GTX 3080 for same price as a car. Mindless people.

July 1, 2016 | 11:03 PM - Posted by hJ (not verified)

AMD's current and future position is the responsibility of AMD and their ability to execute with good products. Companies don't die for being called "crap", they die because they *are* "crap". If we're at the point where we need to tiptoe and speak with carefully chosen words then AMD's not on pathway to recovery anyways.

July 3, 2016 | 09:19 PM - Posted by JB (not verified)

NVIDIA cannot charge whatever they want even if AMD goes away.

They need you to keep buying a new card at whatever tier you buy every year or two, or they go out of business.

Why do you think intel CPUs cost $200-$300 for the most part? Competition from AMD? Not.

July 1, 2016 | 07:31 PM - Posted by Anonymous (not verified)

You forgot when Nvidia released a driver (twice) that fried a lot of 700 and 900 series cards.

July 1, 2016 | 08:02 PM - Posted by Anonymous (not verified)

[citation needed]

July 1, 2016 | 09:23 PM - Posted by arbiter

Spiking to 225 watts is one thing. Most boards and PSUs can handle it. A constant draw near that level is where the problem comes in. A 225 watt spike won't raise temps much and lets them come down; staying at a 200 watt draw means parts get hot and stay hot, and long term that will shorten their life a ton.

June 30, 2016 | 03:08 PM - Posted by jabbadap (not verified)

How about crossfire?

June 30, 2016 | 03:17 PM - Posted by zMeul (not verified)

you didn't take into account that people will put these budget cards on budget mobos - you tested this issue on a $500+ mobo, of course it will be overbuilt
please go and get a cheap mobo that normal people would use and re-do your tests

June 30, 2016 | 03:30 PM - Posted by Anonymous (not verified)

Good point!

June 30, 2016 | 03:34 PM - Posted by Everton (not verified)

This ^.

And also there are cheap crossfire capable motherboards too.

June 30, 2016 | 03:40 PM - Posted by Ryan Shrout

"For our part, we are going to be plugging the Radeon RX 480 into a couple of older platforms and running it in some “bad case” scenarios…just to see what happens."

:)

June 30, 2016 | 03:46 PM - Posted by zMeul (not verified)

"older platforms" doesn't mean less overbuilt - cheap mobo means cheap mobo
please consider my suggestion

June 30, 2016 | 04:22 PM - Posted by Ryan Shrout

How does an H170 board sound?

June 30, 2016 | 04:52 PM - Posted by Anonymous (not verified)

Sounds awesome Ryan! I swear these kids are jackasses who don't actually READ and want to come out sounding important with "their" idea when you obviously suggested it before hand.

But yes, please do test this. I was considering buying this RX 480 but decided to go with an overclocked 1070 (When I could buy the thing) because I was originally going to Crossfire.

Now I'm glad I didn't do that.

June 30, 2016 | 05:28 PM - Posted by Anonymous (not verified)

how does a b150 sound?

June 30, 2016 | 05:39 PM - Posted by Anonymous (not verified)

It sounds too new and not low-end enough. If you want new it should be H110. But ideally it should really be H81. <$50 mobo in any case. Otherwise, you're not really testing anything.

June 30, 2016 | 07:45 PM - Posted by Anonymous (not verified)

Even as a budget gaming system, such a cheap motherboard breaking under stress really shouldn't be a surprise.

June 30, 2016 | 08:33 PM - Posted by Anonymous (not verified)

That's simply not true. Even cheap mobos can handle PCIe spec, at least the ones that are PCIe certified (=any board from any known manufacturer). Cheap mobos not being able to handle loads that are well above spec probably isn't a surprise, but then that's why the specs exist in the first place. And that's exactly what should be tested.

June 30, 2016 | 09:08 PM - Posted by Anonymous (not verified)

I have to second the H110, as most of the "budget VR ready builds" that I have seen are calling for this mobo. Given that this card is heavily marketed as a VR ready card, I feel it would be a fair test.

June 30, 2016 | 06:14 PM - Posted by Anonymous (not verified)

It would also be interesting to see testing on a server board like the Supermicro X11SAT which allows you to hard limit power to the PCI-E slots to 75w (default) or some other arbitrary value via the BIOS (PEG2 Slot Power Limit Value & PEG2 Slot Power Limit Scale).

June 30, 2016 | 08:05 PM - Posted by Allyn Malventano

Such a BIOS limit likely plays into the negotiation that takes place during boot. I wouldn't think that board would have a current regulator just to enforce that limit though. It would probably simply tell the card to not exceed the new limit (and rely on the card to do so).

June 30, 2016 | 06:53 PM - Posted by Anonymous (not verified)

How about a Z87?

I just cancelled my order for an RX480 due to only running a 600w PSU and 4670k in a ASRock Z87M Extreme 4.

July 1, 2016 | 01:22 AM - Posted by Boibo (not verified)

And why would a 600w psu be a problem? Im running a 6700k and fury x on 750, and thats overclocked with no problems.

July 1, 2016 | 02:30 AM - Posted by Anonymous (not verified)

Any z-series board, especially an "extreme 4", should be over built quite a bit. If you paid for a z-series board then you probably wasted your money if you aren't overclocking. A 600 watt power supply should also be fine. Anyway, a board with only a single 6-pin connector is not an overclocking board. If you want to overclock, you should probably wait for a non-reference board with an 8-pin or two 6-pin connectors.

July 1, 2016 | 10:18 AM - Posted by Anonymous (not verified)

4670K and 280X on a quality 450 Watt power supply here, working fine and I'm measuring a max of 330 Watt at the wall. So you should be fine to run two RX 480's in fact :)

July 2, 2016 | 03:32 AM - Posted by Anonymous (not verified)

"How about a Z87?

I just cancelled my order for an RX480 due to only running a 600w PSU and 4670k in a ASRock Z87M Extreme 4."

MSI Z97 Gaming 7
4790k
R9 390
850 pro 250GB
Soundblaster Z
Coolermaster G650M PSU

No problems

The only motherboard problem I have had was with an ASRock 990FX Extreme 3 and an AMD8350. When it came out ASRock claimed it was more than good enough for the 8350, a year later they downgraded the motherboard on the support sheet. The motherboard was under specified power wise for the chip.

In its early life, before I knew this, I tried overclocking and was throttled at 4.5GHz so I ran with stock clocks.
One evening, when the motherboard was 3 to 4 years old, I saw the VRMs glow, smoke and die.

June 30, 2016 | 07:25 PM - Posted by Anonymous (not verified)

Sounds way too new -

core 2 duo cheapo -

amd 970 like asro'k 970 w phenom 2 X4 and 8370 type socket

How can I populate 20 older systems with this greatly inexpensive card if it's going to fry them ?

July 1, 2016 | 01:08 AM - Posted by Anonymous (not verified)

It might be interesting to see how a modern card performs with a 10 year old core2duo.

July 1, 2016 | 11:43 AM - Posted by Anonymous (not verified)

Well we've been running 970s in core 2 duos and core 2 quads and the customers are very pleased with the results, and in the amd 970 platforms, which also handle the 970 gpu just fine.

June 30, 2016 | 07:54 PM - Posted by Robert P (not verified)

Ryan, do you have the old Asrock Z68 or Z77 boards with the 4 layer PCB to test on?

July 1, 2016 | 12:23 AM - Posted by NerfDog1 (not verified)

How about a dell?

July 1, 2016 | 04:59 AM - Posted by Anonymous (not verified)

Please try older aging chipsets with PCIe 2.0 x8 CFX.

  1. x48 (LGA-775)
  2. P55 (LGA-1156)
  3. x58 (LGA-1366)

https://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_chipsets
https://en.wikipedia.org/wiki/List_of_Intel_chipsets#LGA_1156
https://en.wikipedia.org/wiki/List_of_Intel_chipsets#LGA_1366.2C_LGA_201...

These correspond with aging yet capable and non bottlenecking CPU still currently in usage. Such as Q9550, i5/i7 760/860, i7-920-950.

July 1, 2016 | 05:18 AM - Posted by zMeul (not verified)

here: https://youtu.be/rhjC_8ai7QA
confirmed case the RX480 messes with cheapo mobos

July 1, 2016 | 06:17 AM - Posted by JohnGR

Nice video, but I think it's not cheepo mobos, but old mobos. He says it in the video. Any new motherboard, even the cheep ones, will have no problem. A motherboard from 2006, it could have. He was testing an AM2 motherboard. Not exactly new.

July 4, 2016 | 08:54 AM - Posted by Anonymous (not verified)

Cheep mobos are for the birds.

July 1, 2016 | 11:11 AM - Posted by Anonymous (not verified)

I would suggest testing on B150M and H110M based motherboards as many of the gamers (in my country anyway) are planning to pair the RX 480 with these boards.

Typical upgrade scenarios I am seeing:

- Intel Core i3-6100 on an H110 or B150 motherboard with RX 480
- AMD K-series APU on a budget FM2+ motherboard with RX 480

(BTW, really appreciate your work Ryan and thanks for sharing this information)

July 3, 2016 | 09:00 AM - Posted by Jet Fusion

I guess it will make different sounds when you hit it with different items with different force applied.
The board will sound allot more different when you hit it with different items with different forces applied under water too.
Hard question to answer really.

July 1, 2016 | 05:07 PM - Posted by Alamo

if you want old and cheap get some AM3 with FX 9570 CPU.
these boards were out in like 2011, and were like 50-80$, the pci-e 2.0, if the 480 doesnt kill the mobo the cpu would :D

July 2, 2016 | 01:00 PM - Posted by Anonymous (not verified)

Can you now test the 950 and 750 series please.

July 5, 2016 | 10:45 AM - Posted by Anonymous (not verified)

No idea how credible this guy is but he claims that he ran into some problems. https://www.youtube.com/watch?v=rhjC_8ai7QA

June 30, 2016 | 03:47 PM - Posted by remc86007

I'm a little concerned with this since I've ordered the XFX model with the factory overclock of 1328MHz. I'm assuming the card is binned to hit that frequency, but since your card was pulling a crazy amount more power to gain just a few MHz, do you (or anyone) know if my card will pull way over 150 watts just to hit its factory overclock?

I haven't found any reviews of this specific card, but I'll report back once mine arrives.

June 30, 2016 | 06:56 PM - Posted by Anonymous (not verified)

As my post above I just cancelled my very own order for that exact same card.

July 1, 2016 | 11:42 AM - Posted by remc86007

Haven't checked power draw yet, but with a slight bump in fan speed my card will stay pegged at 1328MHz during furmark with no problem.

June 30, 2016 | 03:54 PM - Posted by Undercover Nerd (not verified)

Could I leave this post here? GTX 750Ti powered by PCI-E only.

http://www.overclock.net/t/1604477/reddit-rx-480-fails-pci-e-specificati...

Tom's Hardware GTX 750ti Review:
http://www.tomshardware.com/reviews/geforce-gtx-750-ti-review,3750-20.html

From my understanding, there are other GPUs in the market over the years that draw more than 75 watts from the PCI-E.

June 30, 2016 | 04:16 PM - Posted by Allyn Malventano

Power delivery is an inherently noisy thing, especially with switching DC regulation at play. Note that the average power (and assumed current draw) for those cards remained within the limits of the spec. While that Toms review specifically stressed 'how important fast measurements are', instantaneous changes / spiking due to switching voltage regulator phases are not truly representative of the average. Further, extremely brief spikes like this are *easier* for a PSU to handle, because they are more easily absorbed by the PSU output capacitors (caps *love* opposing instantaneous changes in voltage) and likely wouldn't impact the PSUs voltage regulation at all, as such a circuit would employ filtering similar to what we do with our capture output.

That said, extreme spikes can place additional instantaneous load on the motherboard traces *between* that graphics card and those PSU output capacitors, but again, these are so brief that they shouldn't be melting or overheating traces or connectors. I'd be more worried about excessively noisy power circuitry inducing additional noise to adjacent data lines, but motherboard makers are usually careful to keep those separated / perpendicular as to minimize power trace noise cross-talk with data lines. It's the average (over a second or so, depending) current draw that would be responsible for excessive heat build up and damage. This is actually confirmed by the motherboard makers we spoke to in preparation for this article (page 2).

June 30, 2016 | 09:22 PM - Posted by Anonymous (not verified)

I think the danger from those extreme spikes isn't about heat damage; it's about the sudden increases in voltage and current causing a breakdown of the insulation materials that keep the voltage where it should be. If the voltage increases high enough, and abruptly enough (say on the microsecond level), they can physically damage the plastics and fibers and resins that make up the whole electrical insulation system. They're like electrical karate chops.

A material that might stand up to 4VDC all day long may completely break down if you hit it with 5VDC with a 1.2uS impulse front.

June 30, 2016 | 09:24 PM - Posted by Anonymous (not verified)

Edit - make that a 1.2µS pulse front.

(I just learned the alt code for the "micro" - ALT+0181)

July 1, 2016 | 02:48 AM - Posted by Anonymous (not verified)

Doesn't the voltage go down (droop) when the current spikes? I think the main concern would be continuous overload of traces, although I would expect these to be over designed on most boards. Also, the pins on the card for the pci-e slot have a very small area of contact, but they generally are gold plated. The pci-e power connectors have a much larger area of contact and can probably carry significantly more power than what the spec says. Any damage to insulation would probably be heat related rather than voltage related. Even over the long term though, motherboard power traces are probably large enough that they may not take any damage.

July 1, 2016 | 03:32 AM - Posted by Allyn Malventano

Yes, as current spikes up, voltage would drop further. We didn't focus on that for the same reason we are not overly concerned with the spikes.

July 1, 2016 | 03:34 AM - Posted by Allyn Malventano

I see where you're coming from, but you'd have to get into hundreds of volts to start breaking down insulation. Also, the current is what is spiking, which means voltage would be dipping further due to the brief peak loads. That's the opposite effect of what you're going for in your thinking.

July 1, 2016 | 04:26 AM - Posted by dragosmp (not verified)

The current ripple is not a big problem as long as the voltage ripple is low or non-existent. A current spike might be just a Y-cap or a filtering cap draining - that's its job, provide the peaky current spikes while keeping the voltage stable within certain limits.

My last point, fwiw, is that the wattage numbers floating around are obtained by multiplying instantaneous U*I. This is true for steady-state DC, for transients the phase between the AC and DC should be considered (good luck explaining that in a review) P=U*I*cos(phi)

AMD kinda made a controversial choice; however I think it's borderline on the good side if the user doesn't put this card in a cheap mobo where likely other controversial choices have been made to reduce the cost by 5c. AMD will have a challenge explaining it. Your article @pcper helps this a lot and as an electrical engineer I appreciate the way you paint quite a complex issue. I guess I'd suggest you had somebody make a nicer wiring as you measure high frequency stuff and your setup just isn't inspiring confidence in the peak values; on average it's likely fine.

July 1, 2016 | 05:36 AM - Posted by Anonymous (not verified)

One other issue you might encounter with very high instantaneous load spikes is when NOT using a standard ATX supply. It is becoming more and more common to use an outboard AC-DC converter at a fixed voltage (usually 19.5V, sometimes 12V) and an internal DC-DC board to provide 12V, 5V, 3.3V, etc. This is happening both in DIY chassis using HDPLEX and similar DC-DC converts, and in commercial machines like the Asus RoG G20.

The problem many are encountering is that because these use laptop PSUs to provide the base voltage, they are very vulnerable to 'spikey' loads, the R9 Nano in particular. Laptop PSUs are designed to charge a laptop battery and power a laptop with its own internal load-management circuitry. When used with a 'dumb' DC-DC converter, they trigger overload protection extremely quickly, or just fry, when faced with rapidly varying power draw. Using more robust outboard PSUs works fine, but defeats the intent of minimising size that motivates the use of outboard PSUs in the first place.

This is the primary reason why SFF PCs that use outboard PSUs (rather than internal SFX or FlexATX) tend to use short GTX 970s rather than R9 Nanos, even though they do not achieve the same theoretical performance.

If the RX 480 load spikes in the same way as the R9 Nano, that effectively rules it out for such SFF PCs, regardless of performance or availability of a short PCB version.

July 1, 2016 | 02:36 AM - Posted by Anonymous (not verified)

It's the average that is important... Not spikes.

July 1, 2016 | 02:52 AM - Posted by Anonymous (not verified)

It isn't just the average. It is how long it remains overloaded. A few seconds over the limits might not be an issue, but if it runs over the limits for a long period of time, it could cause damage. The spikes we see with the 750 are probably so short that they will not cause any significant heating of wires or connectors.

June 30, 2016 | 03:59 PM - Posted by Anonymous (not verified)

Great reporting there sir.

June 30, 2016 | 04:15 PM - Posted by tatakai

still more questions

What controls the power draw? Will the motherboard just give out power it can't handle? And if a high-end motherboard is freely dishing out the power because it can handle it, would lower-end boards refuse to put out that power, resulting in readings lower than what reviewers see?

What about all the GPUs before this that have exceeded the 75W limit? E.g. some 960 cards and some 750 Ti cards (probably more common with cards that don't have a 6-pin or 8-pin).

Exceeding the slot limit is probably rarer than exceeding the plug limits. Tests show 1080s, 980 Tis, etc. often going past the total power their slot and PCIe plug configuration can support, and definitely so when overclocked.

Ultimately, is this even an issue at all in light of the above? I can see the slot thing being considered an issue, though I am not sure going that few watts above the assumed spec would damage the pins as suggested. Your contact says the motherboard itself should be able to handle it, which I would assume is the case for boards with more than one x16 slot.

June 30, 2016 | 04:24 PM - Posted by Allyn Malventano

Power draw is controlled by the DC-DC converter phases on the card itself. The +12V from the slot is typically right off of the common +12V bus that runs throughout the motherboard (from the PSU).

We went back and looked through a lot of our results, and no other card exceeded the 75W limit in stock form, even if they exceeded the 6-pin and 8-pin limits when overclocked. PSUs are able to handle exceeding spec on the 6/8-pin links as opposed to the relatively thin traces / connector pins related to the PCIe slot itself (as noted by the droop seen in our testing), which is probably why prior cards were designed to be more careful about the slot spec than the 6/8-pin spec.

The issue is when the sustained average draw exceeds 5.5A on the slot, because the slot is not as able to handle exceeding that limit compared to inputs directly from the PSU.
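A minimal sketch of that sustained-versus-spike distinction, using a simple moving average as the filter; all current values below are made up for illustration:

```python
# Illustrative sketch (made-up numbers): brief spikes and a sustained
# overdraw both exceed 5.5 A momentarily, but only the sustained case
# stays over the slot limit after filtering (moving average).
def moving_average(samples, window):
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

SLOT_LIMIT_A = 5.5  # +12V slot current limit per the PCIe CEM spec

spiky     = [4.8, 4.8, 7.5, 4.8, 4.8, 4.8, 4.8, 4.8, 7.4, 4.8]  # brief spikes
sustained = [6.8, 6.9, 7.0, 6.9, 6.8, 7.0, 6.9, 6.8, 7.0, 6.9]  # RX 480-like

for name, trace in (("spiky", spiky), ("sustained", sustained)):
    filtered = moving_average(trace, window=5)
    over = max(filtered) > SLOT_LIMIT_A
    # the spiky trace drops back under the limit once filtered;
    # the sustained trace does not
    print(f"{name}: filtered max {max(filtered):.2f} A, over limit: {over}")
```

The window length and trace values are arbitrary; the point is only that averaging removes spikes but not a sustained overdraw.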

June 30, 2016 | 05:07 PM - Posted by Bob Jones (not verified)

I guess you clearly happened to not test any 960s/Nvidia cards in the past, then?

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-960,4038-8.html

June 30, 2016 | 05:12 PM - Posted by Allyn Malventano

As explained in this write-up, unfiltered output is not representative of true power draw. Switching DC-DC converters will cause current spikes, but motherboards are designed to tolerate those. As the motherboard makers we asked informed us, instantaneous spikes (within reason) can be tolerated. It is the sustained (average / filtered measurement) current draw that is the issue, as that is what leads to excessive heat and possible damage. In your linked review, average draw was *well below* the limit, while the average draw of the 480 was above (>50% when overclocked) that same limit.

June 30, 2016 | 05:25 PM - Posted by Bob Jones (not verified)

This is the same BS Nvidia fanboys on reddit are saying. Where are you getting any average at all for the Strix 960? THERE ISN'T ONE. Answer me this: what is the average power draw, in integer form, of the 960 in Tom's testing? You won't have an answer because there isn't one, so why would you imply there is if you aren't being intentionally dishonest?

Look here as well https://forums.overclockers.co.uk/showpost.php?p=29719176&postcount=10420

The overall power profile of the Strix 960 from the PCIe slot is pretty clearly no better than the 480's, if not worse.

I also reject this sudden, all-too-convenient (because it favors Nvidia with the data we have) notion that spikes don't hurt things. What is the purpose of all my surge protectors then?

I'd guess the "noisy" and "spikey" profile of the 960 is far more damaging than a constant barely-over-spec load, especially since tolerances that can handle over-spec draw are likely built in; that's just good, common design. And not just the +8%, but individual motherboard improvements.

But yeah, your car is more damaged by a 5-minute trip to the store than a 100-mile drive. It's not constants that should damage things so much as spikes, be they in temperature or what have you.

June 30, 2016 | 05:33 PM - Posted by Bob Jones (not verified)

Since we can't edit, I'd just like to note that when asking for the average of the Strix card, I'm asking for the average draw from the PCIe slot, since that's obviously where the discussion lies. There is no such average given, so claiming it to be below 75 (it certainly looks above by eyeballing the graphs) is careless at best, disingenuous at worst. But weirdly it's the same kneejerk reaction I got on reddit...

June 30, 2016 | 05:36 PM - Posted by Allyn Malventano

Tom's power charts have an average line across their results (the dotted yellow line). That line is actually in the first two pictures in your link.

The 480 is exceeding the limit on average, while Tom's testing showed a bunch of cards exceeding the max, which in my (and the motherboard makers') opinion is not a major cause for concern. The average sustained draw *is* the cause for concern, especially given the noticeable voltage droop present at that >50%-over-spec current draw when overclocked.

Oil starvation of bearings on a cold engine start is not the same thing as an instantaneous current spike. A more correct analogy would be the circuit breakers in your house, which, by the way, are rated to break the circuit at a *sustained* load; otherwise, your vacuum cleaner would trip your breaker each and every time you tried to turn it on. The breaker is there to protect wiring from overheating, similar to the PCIe spec protecting the traces and connectors supplying current to the card.

Full disclosure - I've rebuilt several cars and I am also a retired Navy Electronics Technician who not only taught electronics but troubleshot and repaired reactor control systems while at sea. Trying to pull a misplaced car analogy over on me isn't quite going to work.
But hey, what do I know... :)

(other than to say that 'BS Nvidia fanboy' on reddit is probably correct).
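The breaker analogy above comes down to heating: power dissipated in a contact resistance scales as I²·R, so what matters is how long the current stays high. A rough sketch, with an assumed (not measured) contact resistance and made-up spike/sustained figures:

```python
# Why sustained draw, not spikes, is the heating concern: energy deposited
# in a resistance is I^2 * R * t. The resistance below is an illustrative
# guess for the combined +12V slot pins, not a measured value.
R_CONTACT = 0.010  # ohms (assumption)

def joules(current_a: float, seconds: float) -> float:
    """Energy dissipated in the contact resistance."""
    return current_a**2 * R_CONTACT * seconds

spike = joules(8.0, 0.001)     # an 8 A spike lasting 1 ms
sustained = joules(7.0, 60.0)  # 7 A sustained for one minute

# the sustained case deposits tens of thousands of times more heat energy
print(f"spike: {spike:.4f} J, sustained: {sustained:.1f} J")
```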

June 30, 2016 | 08:54 PM - Posted by Anonymous (not verified)

Excuse my naïve question, but are the traces on the RX 480 up to the task of carrying that amount of power via the PCIe slot?

July 1, 2016 | 12:36 AM - Posted by Allyn Malventano

I haven't looked closely at this specific card, but the +12V inputs are typically almost immediately tied to a (thicker) +12V power trace. The weak spot is the pins in the slot/connector, and any longer (thin) traces that happen to be running across the motherboard, if any.

July 1, 2016 | 04:07 AM - Posted by Anonymous (not verified)

Thanks Allyn.

June 30, 2016 | 11:13 PM - Posted by Activate: AMD (not verified)

I love that an obviously biased fanboy is accusing a well-respected and extremely competent tech journalist, one who has actually spoken to mobo manufacturers, of being biased. Keep doing what you're doing, Allyn.

July 1, 2016 | 10:08 AM - Posted by Termiux

Agreed keep the good work Allyn

July 1, 2016 | 10:10 AM - Posted by Termiux

Daaaamn, buuurn. Honestly, your expertise is what differentiates you guys from other reviewers. Basically no other reviewers do as great and detailed a job as you guys. Keep it up!

June 30, 2016 | 05:17 PM - Posted by Bob Jones (not verified)

Seriously, I'd like to know EXACTLY what cards you supposedly had old data on that were good?

And here you're also conflating things, as you focus on AMPS in your article, where the overage looks larger than it does in watts. In watts the 480 barely exceeds 75, so it seems likely that past cards would not either.

I'd like you to test non-cherry-picked Nvidia cards (in other words, don't pick all the low-power ones you think will pass), including known offenders like the Strix 960 and other 960s, in the EXACT manner of your article, including checking AMPS, not watts, as you did for the 480, and see if any exceed spec.

I realize I'm just demanding this and that, but seriously, a lot more insight on this would be nice than just "yeah we looked at old stuff and it's all good". That actually tells us nothing specific.

Also, I just realized one of your most egregious slants: your table that lists how far the 480 is out of spec on amps actually uses the OVERCLOCKED value for the percentage, without noting this! That's kind of unbelievable for any kind of official-looking chart! Every component is likely out of spec when overclocked!

June 30, 2016 | 05:38 PM - Posted by Allyn Malventano

> And here you're also conflating things, as you focus on AMPS in your article, where the overage looks larger than it does in watts. In watts the 480 barely exceeds 75, so it seems likely that past cards would not either.

Did Ohm's law suddenly stop working? Also, the spec clearly states its limits in AMPS.
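For readers following the amps-versus-watts exchange: converting the per-rail current limits into watts is just P = V * I. A quick sketch, using the per-rail figures as I read them from the CEM spec's slot power table (treat the rail list as my assumption):

```python
# Per-rail slot limits for a 75 W x16 slot, per my reading of the PCIe CEM
# spec: 5.5 A on +12V and 3.0 A on +3.3V (3.3Vaux omitted for simplicity).
RAILS = {"+12V": (12.0, 5.5), "+3.3V": (3.3, 3.0)}

total_w = 0.0
for name, (volts, max_amps) in RAILS.items():
    watts = volts * max_amps           # P = V * I
    total_w += watts
    print(f"{name}: {max_amps} A max -> {watts:.1f} W")
print(f"combined: {total_w:.1f} W")    # the familiar ~75 W slot figure

# hardware.fr's Battlefield 4 measurement, quoted elsewhere in this thread:
measured_a = 6.92
print(f"6.92 A -> {12.0 * measured_a:.1f} W, "
      f"{measured_a / 5.5 - 1:.0%} over the 5.5 A limit")
```

This is why the overage looks bigger in amps than in whole-card watts: the 5.5 A figure applies only to the +12V portion of the 75 W budget.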

June 30, 2016 | 07:11 PM - Posted by Jeremy (not verified)

Allyn, I've been doing some reading, and some people are claiming that, according to the PCIe specification, it's possible that it's actually the motherboard that sets the maximum power the slot is able to supply. Do you have any thoughts on this?

Link: http://composter.com.ua/documents/PCI_Express_Base_Specification_Revisio...

Specifically page 527, Section 6.9:
"Power limits on the platform are typically controlled by the software (for example, platform firmware) that comprehends the specifics of the platform such as:
- Partitioning of the platform, including slots for I/O expansion using adapters
- Power delivery capabilities
- Thermal capabilities

This software is responsible for correctly programming the Slot Power Limit Value and Scale fields of the Slot Capabilities registers of the Downstream Ports connected to slots. After the value has been written into the register within the Downstream Port, it is conveyed to the adapter using the Set_Slot_Power_Limit Message (see Section 2.2.8.5). The recipient of the Message must use the value in the Message data payload to limit usage of the power for the entire adapter, unless the adapter will never exceed the lowest value specified in the corresponding form factor specification. It is required that device driver software associated with the adapter be able (by reading the values of the Captured Slot Power Limit Value and Scale fields of the Device Capabilities register) to configure hardware of the adapter to guarantee that the adapter will not exceed the imposed limit. In the case where the platform imposes a limit that is below the minimum needed for adequate operation, the device driver will be able to communicate this discrepancy to higher level configuration software. Configuration software is required to set the Slot Power Limit to one of the maximum values specified for the corresponding form factor based on the capability of the platform. The following rules cover the Slot Power Limit control mechanism:

For Adapters:
- Until and unless a Set_Slot_Power_Limit Message is received indicating a Slot Power Limit value greater than the lowest value specified in the form factor specification for the adapter's form factor, the adapter must not consume more than the lowest value specified.
- An adapter must never consume more power than what was specified in the most recently received Set_Slot_Power_Limit Message or the minimum value specified in the corresponding form factor specification, whichever is higher."
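For context on the register fields quoted above: the Slot Power Limit Value and Scale combine into a wattage as sketched below. The bit positions reflect my reading of the Base spec's Slot Capabilities layout and should be treated as an assumption, not something verified against hardware:

```python
# Hedged sketch: decoding a Slot Capabilities register value into the slot
# power limit in watts. Bit positions (value in 14:7, scale in 16:15) are
# my reading of the PCIe Base spec, stated here as an assumption.
SCALE = {0b00: 1.0, 0b01: 0.1, 0b10: 0.01, 0b11: 0.001}

def slot_power_limit_watts(slot_capabilities: int) -> float:
    value = (slot_capabilities >> 7) & 0xFF   # Slot Power Limit Value
    scale = (slot_capabilities >> 15) & 0x3   # Slot Power Limit Scale
    return value * SCALE[scale]

# e.g. value = 75 with scale = 00b (x1.0) encodes a 75 W slot limit:
reg = (75 << 7) | (0b00 << 15)
print(slot_power_limit_watts(reg))  # 75.0
```

As Allyn notes in his reply, in practice this mechanism is about the adapter's advertised budget (e.g. 25 W vs. 75 W modes), not about the board physically throttling a card that overdraws.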

June 30, 2016 | 07:24 PM - Posted by Allyn Malventano

They are referring to the power limit switch; in this context it would be a GPU asking for high-power mode during boot, where its allowed power shifts from 25W to 75W. Note that the spec lists both 25W and 75W under the x16 column.

June 30, 2016 | 06:07 PM - Posted by tatakai

>Power draw is controlled by the DC-DC converter phases on the card itself. The +12V from the slot is typically right off of the common +12V bus that runs throughout the motherboard (from the PSU).

A common 12V bus. So the card is pulling what it needs from this bus, but I would assume the motherboard itself regulates the total power available? The motherboard won't tell the card "take this much," but it will tell the card "you are taking too much, get lost."

The question seems to be whether the slot pins can physically handle the extra power, because it looks like the motherboard is wired to provide it (I assume having multiple slots means being able to deliver 75W to each). The problem would then be the contacts getting too hot and burning out? Would this average draw exceed some known limit for the slot, then? The current the pins can withstand should be known. Max temps should be known. Somewhere.

June 30, 2016 | 04:17 PM - Posted by Anonymous (not verified)

"nor have I heard of any other reviewers indicating as much"

HFR did the same tests and came up with the same results.

http://www.hardware.fr/articles/951-9/consommation-efficacite-energetiqu...

About current draw: "At this level it goes well beyond the specification, which is 5.5A. In Battlefield 4, we measure 6.92A at default and 7.10A in 'Uber' mode."

June 30, 2016 | 04:28 PM - Posted by Allyn Malventano

They noted the higher draw, but Ryan was referring to instabilities *because* of that higher draw.

June 30, 2016 | 04:23 PM - Posted by pdjblum

Great write-up. It boggles my mind to think that AMD and their graphics card partners were not completely aware of this issue. Actually, it is impossible for me to believe they didn't know. Didn't Scott do a thorough insider review? He would not have missed this. Assuming they knew, how could they have let this happen? I would think the board partners would not have wanted to release hardware that was out of specification. So many people had to have known, and so many people had to have either swept it under the rug or been in denial, or I don't know what. Furthermore, Raja is a very honest and capable leader. He stated yesterday, during the interview with Ryan, that it was all about the customer and their experience. I cannot believe he let this get by. Customers certainly should not have to worry about whether their shiny new card is going to damage their mobo.

June 30, 2016 | 04:36 PM - Posted by remc86007

I agree. Something weird is happening here. They've even stated that they were "certified" by a third party.

June 30, 2016 | 09:35 PM - Posted by Anonymous (not verified)

I think they were talking about PCI-SIG, who wrote and copyrighted the PCIe standards that we all use and love. In order for AMD or their board partners to be able to put the PCI-Express logo on their product boxes and in their advertising and be able to proclaim PCIe compatibility, the card has to be certified by PCI-SIG, and I'm pretty sure they do their own testing for that (though they might use an impartial third party.) So PCI-SIG has to take the card and put it through their standardized testing regimen. If it doesn't pass all their requirements, they don't get the certification.

If PCI-SIG did indeed test it and certify that it is compliant, then either something changed between their certification card and the retail shipping cards (which would be very bad for AMD) or there's some other factor going on that PCI-SIG didn't think to test for (which would likely lay blame on some other component than the card).

In this particular discussion, the only other option that I can think of is that PCI-SIG didn't test it at all and just sent it back with a certification, or they tested it, saw it was out of spec, decided it was within acceptable parameters regardless, and sent it back with a certification. If either of these turns out to be the case, it'll probably mean that a PCI-SIG certification will mean about as much as a WHQL certification anymore.

June 30, 2016 | 04:33 PM - Posted by remc86007

I'm completely ignorant as to how these things work. Is the distribution between the PCIe slot power and the six-pin controllable by driver, could a firmware update change it, or is it just hardwired to act this way?

June 30, 2016 | 04:58 PM - Posted by Bob Jones (not verified)

To me, as somebody with a 480 in the mail on the way, this is the million-dollar question... and like so many things about this issue, I don't have a clear answer. One German site supposedly said or implied it isn't fixable short of a PCB revision. Others have hinted a BIOS update or driver update could do it. I don't know.

I suspect it's not really an issue in the real world; no review sites had reliability issues, for example. I also suspect TONS of past cards do the same thing and just weren't tested or publicized (check the Strix 960 link I keep posting). Still, I'm kinda OCD, and this is exactly the type of thing that will probably scratch at my brain and prevent me from enjoying my shiny new card :(

It seems to me, very worst case, AMD could probably just issue a driver that downclocks the thing 5 or 10% and problem solved. Personally, I wouldn't mind the performance loss, since it's still like 10X as powerful as my old card. But some would.

June 30, 2016 | 06:26 PM - Posted by Allyn Malventano

That Strix 960 review you keep posting shows the sustained power draw is not actually an issue.

June 30, 2016 | 04:44 PM - Posted by Anonymous (not verified)

Pcper and Tomshardware shot to the top of my list of respected hardware review sites after this RX480 launch. Thank you for this excellent article.

June 30, 2016 | 04:51 PM - Posted by Bob Jones (not verified)

They shot to the bottom of my list since they completely ignored this:

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-960,4038-8.html

Well, actually, Tom's didn't completely ignore it; in fact, they discovered it, although their language was markedly less alarmist in the Nvidia case. The main difference is the countless millions of Nvidia fanboys who surfaced this issue in AMD's case, in order to try to hurt AMD and a new product launch, whereas nobody surfaced it back when the 960 did it, because that was an Nvidia card.

June 30, 2016 | 04:57 PM - Posted by Anonymous (not verified)

You're clinging to a single non-reference GTX 960 to keep you warm through this RX480 storm?

Go find results from a reference GTX 960 or a different aftermarket GTX 960, and you will most likely see that this specific case was caused by this ASUS model and not the GTX 960 as a whole.

This article is about the reference RX480. AMD messed up big time in their attempts to market this as a power sipping single six pin card.

June 30, 2016 | 04:59 PM - Posted by Allyn Malventano

Actually, in that review, the average draw was *below* the limit, and Tom's took issue with 'unfiltered power draw'. The catch there is that PCIe devices are specifically prohibited from having high filtering (capacitance) on their power inputs, as noted by the 2000uF limit stated in the PCIe spec. With such a capacitance limit on the card side, PCIe devices with switching supplies (basically all GPUs) can exceed an instantaneous power draw of 75W when measured without filtering the output - especially if the average draw is close to the limit. It's just the nature of the beast.

That said, the max spikes (noted in the Tom's review) do seem rather high compared to other cards, but if the average goes up, then spike magnitude will also increase roughly proportionally given the same type of switching.
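The reason that 2000uF card-side limit pushes spikes out onto the connector can be sketched with the basic capacitor relation dV = I * dt / C; every number below is an illustrative assumption, not a measurement:

```python
# Rough sketch: with card-side bulk capacitance capped at 2000 uF, the caps
# can only ride out a transient for so long before the voltage droops and
# the slot/connector must supply the current. All figures are assumptions.
C = 2000e-6      # farads: the PCIe card-side capacitance limit
I_SPIKE = 2.0    # amps of extra transient demand (made up)
MAX_DROOP = 0.6  # volts of droop we allow on +12V before it matters
                 # (assumed margin inside the rail tolerance)

# dV = I * dt / C  ->  dt = C * dV / I
dt = C * MAX_DROOP / I_SPIKE
print(f"{dt * 1e6:.0f} microseconds")  # -> 600 microseconds
```

Anything longer than that (at these assumed numbers) shows up as current through the slot pins, which is why limited on-card filtering makes unfiltered spike plots inevitable.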

June 30, 2016 | 05:31 PM - Posted by Bob Jones (not verified)

There is no average power draw from the PCIE slot from the Strix 960 presented anywhere in that article.

There is an overall average, I guess, of total draw on the other table, but that's not the same thing, nor where the problem lies.

June 30, 2016 | 05:40 PM - Posted by Allyn Malventano

We don't have a 960 Strix, so unfortunately we can't replicate. If that one card was over the average, then sure it's an issue, but if Toms didn't plot average in that review, neither you nor I can draw any definite conclusion.

*edit* Correction, we *do* have one, but there's no need to test it; you just need to scroll down on Tom's is all:

Note the yellow dashed line at 50. This is the same data you were looking at in that 3D chart.

June 30, 2016 | 06:36 PM - Posted by akumaburn (not verified)

The issue at hand is that the 960's average was taken when it wasn't running Metro Last Light at 4K...

So it's a somewhat unfair comparison... better to compare the stock graph of the 480 under a normal scenario to the 960's...

Because that Tom's Hardware graph is the NORMAL scenario for the ASUS Strix 960.

This can be seen in the graph next to it, which shows that the average power draw was 100 watts.

Asus GTX 960 Strix OC
7W - IDLE
100W - Normal
144W - Torture
130W - Power Target

:)

June 30, 2016 | 06:48 PM - Posted by Allyn Malventano

Is the 100W 'normal' figure not *whole card* (slot and 6/8-pin connector) power draw?

June 30, 2016 | 07:08 PM - Posted by akumaburn (not verified)

Yeah, it is whole card... but my point is the 12-volt reading was also under "normal" circumstances; it wasn't a torture test like running Metro Last Light at 4K.

June 30, 2016 | 08:08 PM - Posted by Allyn Malventano

Yeah I follow. If the issue blows up much further we can just test that 960 Strix with Metro at 4K, but it really doesn't look like it's going to exceed 75W slot power based on what I've seen elsewhere.

July 2, 2016 | 12:34 PM - Posted by WithMyGoodEyeClosed (not verified)

That test should be done to acquire data on the actual damage long-term average usage can cause, which is your point anyway and a very important vector to be examined at this stage.
That is, if the 960 is found to exceed the spec, of course.

July 1, 2016 | 01:28 AM - Posted by Anonymous (not verified)

I abandoned Tom's hardware a very long time ago. I used to read it a lot maybe back around 2000, 2001 or so. I switched to Anandtech for a long time. These days I mostly just read PCPer and Arstechnica. Anandtech still has some good articles though.

June 30, 2016 | 04:48 PM - Posted by Bob Jones (not verified)

Anyway, I could tell right away by the tone of this article that this guy is an Nvidia fanboy. Why don't you guys go do testing on the Strix 960, for example, which Tom's found was pulling way more than 75 watts from the PCIe slot? Nobody was talking about motherboards burning up back then... hypocrites. Where was PC Perspective back then? Did they ignore the issue because hordes of Nvidia fanboys didn't surface it like this time?

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-960,4038-8.html

This could be an issue and I am concerned about it, but like I said, it's pretty clear Ryan Shrout is an Nvidia fanboy and biased (here comes his "who, lil ol innocent me?" spiel - save it, everybody has biases).

June 30, 2016 | 04:57 PM - Posted by Everton (not verified)

So, an AIB model?

June 30, 2016 | 05:48 PM - Posted by Allyn Malventano

That result is right here (with an average of 50), straight from Toms. Motherboard makers have told us that sustained current draw exceeding the spec can lead to damage. Are you going to claim those companies (who typically sell cards from both camps) are also biased now?

Seriously, dude, you're trying way too hard here...

July 2, 2016 | 06:44 AM - Posted by Eugenernator (not verified)

Don't mind me. Just enjoying the show (ノ◕ヮ◕)ノ

June 30, 2016 | 04:48 PM - Posted by NamelessTed

Awesome to see you guys looking into this further. AMD seems pretty adamant that the cards passed testing, do you think these issues are specific to a small % of RX 480s? Did the small handful of sites that could actually measure these things all happen to get bad cards?

Is there even a slim chance a BIOS update to the card could mitigate some of the issue? Potentially capping PCIe slot power to 75W and forcing any extra power needed from the 6-pin? Or is that something that would need to be done with a different board design? Would the only BIOS fix simply be to cap it all around and limit the card's ability?

As for motherboard concerns, do you think a manufacturer would void a warranty if an RX 480 did happen to ruin the PCIe slot? Also, does CrossFire make this issue even worse on the motherboard?

I realize it's a lot of questions, and you guys probably have plenty of things you will be testing on this. Hopefully we will find out more from you guys, AMD, and others in the next week.

I guess this kind of stuff is why people always say to wait for the reviews. Why wouldn't AMD just put an 8-pin connector on this thing? Seems like such a terrible choice.

July 1, 2016 | 02:10 AM - Posted by Anonymous (not verified)

I suspect that AMD had this design ready to go, but may have bumped the clock speed a bit shortly before launch, pushing it a bit beyond the spec. With how much of the power circuitry is software controlled, I expect that this can be changed with a driver or firmware update. If the PCIe/6-pin allocation can't be changed via software or firmware, then they will probably have to change the boost clock settings: the card just wouldn't be able to run at higher boost clocks during the segments that put it over the power limit. I doubt that would change the user experience much. It is over the power limits for some segments, but considering how much the clock speed changes the power consumption, a relatively small decrease could put it under the limit. For any overclocking, it looks like it would be best to wait for a non-reference design with an 8-pin or two 6-pin connectors anyway; a single 6-pin connector is pretty limiting. With how much people have been able to overclock some of these cards, though, the on-board power delivery and cooling may be fine.
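The commenter's point that a relatively small clock decrease could pull the card under the limit follows from dynamic power scaling roughly as f * V^2 (lower clocks usually permit lower voltage too). A back-of-envelope sketch with made-up figures:

```python
# Back-of-envelope: dynamic power ~ f * V^2. All figures are illustrative
# assumptions, not RX 480 measurements.
P0 = 82.0        # watts at stock boost (made-up starting point)
f_scale = 0.95   # 5% clock reduction
v_scale = 0.97   # small voltage reduction the lower clock might permit

P1 = P0 * f_scale * v_scale**2
print(f"{P1:.1f} W ({(1 - P1 / P0):.1%} lower)")  # -> 73.3 W (10.6% lower)
```

At these assumed numbers, a 5% clock drop with a 3% voltage drop yields roughly a 10% power reduction, which is the shape of fix a driver-side tuning change could plausibly deliver.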

June 30, 2016 | 05:00 PM - Posted by Anonymous (not verified)

Ryan, that diagram seems to be from the PCI Express Card Electromechanical Specification Revision 1.1; you have to be a member to download spec 3.0 (https://pcisig.com/specifications). Your conclusion seems flawed without the latest spec.
Please correct me if this was in the 3.0 spec docs.

June 30, 2016 | 05:06 PM - Posted by Allyn Malventano

We're not a member, but no additional power pins were added in newer versions of the spec. Since the power delivery portion of the slot has not changed (still only 5 +12V pins), it stands to reason that additional power delivery through the slot is not magically possible. Further, those with access to the newest spec brought up that same 5.5A figure.

June 30, 2016 | 07:30 PM - Posted by MD5 (not verified)

I think that it could be possible to repurpose some of the PCIe data pins to 12v power similar to how Qualcomm Quickcharge 3.0 does with usb. I don't know of any company that has done this but in theory it makes sense.

June 30, 2016 | 08:11 PM - Posted by Allyn Malventano

It's not possible here as the data pins are actively in use by the GPU. Quick charging doesn't need data while doing so.

June 30, 2016 | 08:28 PM - Posted by MD5 (not verified)

You could still use some of the data pins; you wouldn't need all of them. In future GPUs, a vendor could power a GPU through the PCIe slot (up to 300W) using some of the pins and run the GPU at an x8 link, and it would work if you had a motherboard that supported it. I think some vendors will try this with PCIe 4.0.

The 300W limit in the PCI-SIG standard doesn't achieve that number through what I am describing; it uses the same 5 12V pins to achieve 300W. At least that is what the standard describes in the 860-page PDF.

Honestly, I would be less worried about the PCIe pins or motherboard traces and more worried about PSU cables in multi-GPU setups, especially with single-rail PSUs on motherboards that use only a single 4-pin + 24-pin.

June 30, 2016 | 11:44 PM - Posted by Anonymous (not verified)

GPUs went down in power a bit this generation. I wouldn't be surprised if they climb back up, though. We already have a solution for providing more power to GPUs, and that is direct connection to the power supply. Directly connecting to the power supply is going to be more efficient than passing power through the motherboard.

June 30, 2016 | 09:57 PM - Posted by Anonymous (not verified)

The table (4-1) is from the PCIe Electromechanical Specification Revision 3.0.

Ryan's second table, 4-2, does not appear in Revision 3.0, but the 75W slot max is defined in the specification.

June 30, 2016 | 06:06 PM - Posted by Searching4Sasquatch (not verified)

But.......but.......GTX 960? Hahahahaha

AMD's new card is rubbish, end of story.

June 30, 2016 | 06:32 PM - Posted by MD5 (not verified)

After this article Raja Koduri probably wants another drink, too bad he has to wait till Vega. You guys really should have given him more bourbon when he was there.

Can you guys please get him back on a stream? Even if it isn't in person (Skype).

June 30, 2016 | 06:44 PM - Posted by JohnGR

There are plenty of graphics cards on the market with TDPs close to the limits of what power they can get from the PCIe bus plus maybe an extra PCIe cable or two; the GTX 950 with NO 6-pin PCIe cable comes to mind. Add to that the fact that many people overclock their cards, and we have been living with cards that are out of PCIe bus specification for years now. We just don't know it; we don't have the equipment to test it ourselves and educate ourselves about it.

One more thing to consider is that many companies with RX 480 models on the market also produce motherboards - not just $500 motherboards, but also $40 motherboards. Until someone sees bright red letters on their websites saying that the RX 480 is incompatible with a specific motherboard, I don't think there is any real problem here.

That being said, AMD again gave the tech press a good excuse to target a new AMD product. But this time it is AMD's fault. They wanted a card that is faster than the 970 and also uses only one 6-pin power connector, to give a very clear advantage to custom designs. That's why I believe they preferred the 6-pin over the 8-pin power connector, and that's why they didn't lower the GPU frequency to guarantee that the card wouldn't need more than 150W at stock settings. When will they learn to stop shooting their own feet?

June 30, 2016 | 06:57 PM - Posted by Allyn Malventano

I totally agree. Had they gone 8-pin, even if only to support overclocked power draw within the spec, this wouldn't be an issue at all. Heck, even if they just overdrew on (only) the 6-pin connector it wouldn't even be a story (plenty of cards overdraw on 6/8-pin when overclocked, so singling out this instance would be unfair).

I was surprised to see the 6-pin and slot power tracking so closely on the 480, especially since other cards we've tested in the past typically have slot power running fairly independently of 6/8-pin power (with sustained slot power always safely below the limit).

I'm going to try to find worst-case overclocked figures for slot-only powered cards and see if/how far they exceed the spec. I can safely guess that none of them ran this close to the spec in the first place while being capable of +50% over their power target, though.

July 2, 2016 | 07:20 AM - Posted by JohnGR

I decided to have a look: 79W without overclocking.
https://tpucdn.com/reviews/ASUS/GTX_950/images/power_peak.png

from this review
https://www.techpowerup.com/reviews/ASUS/GTX_950/21.html

June 30, 2016 | 06:51 PM - Posted by Shaun (not verified)

We should all make absolutely sure that none of our CPUs ever draws more wattage than the official specification. Even a watt or two is dangerous and could damage our motherboards *rolls eyes*

June 30, 2016 | 07:01 PM - Posted by Allyn Malventano

CPUs have banks of voltage regulators mounted extremely close to the socket, as well as over a hundred power pins supplying the CPU. Motherboard makers specifically design in additional power phases for CPU power handling / overclocking. Those same motherboard makers have told us that exceeding sustained power draw on PCIe slots can lead to damage. PCIe slots have 5 +12V pins. Five.
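A quick back-of-the-envelope check of what those five pins carry (a sketch assuming the draw splits evenly across pins, which real boards won't do perfectly; the 5.5 A figure is the PCIe CEM +12V slot budget):

```python
# Rough per-pin current for the PCIe slot's five +12V pins.
# Assumes the +12V load splits evenly across pins (an idealization).
PINS_12V = 5

def per_pin_current(watts, volts=12.0, pins=PINS_12V):
    """Amps through each +12V pin if `watts` is drawn evenly from the 12V rail."""
    return watts / volts / pins

print(per_pin_current(66))   # spec budget: 5.5 A * 12 V = 66 W -> 1.1 A per pin
print(per_pin_current(95))   # a sustained 95 W draw -> ~1.58 A per pin
```

So a sustained 95 W slot draw pushes each pin roughly 44% past the current it was budgeted for.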

June 30, 2016 | 07:51 PM - Posted by leszy (not verified)

Are you sure? :))

http://www.tomshardware.com/reviews/geforce-gtx-750-ti-review,3750-20.html

June 30, 2016 | 08:13 PM - Posted by Allyn Malventano

Yes, we are sure. Sustained power draw equates to the average line in your linked article. That line is under the limit.

June 30, 2016 | 08:45 PM - Posted by leszy (not verified)

How? Isn't the PCI-E slot limited to 75W? We can see nearly constant 95W power draw on the chart.

July 2, 2016 | 02:25 AM - Posted by arbiter

The PCI-e spec says it's 75 watts, so to be within spec a board only has to support that much for consistent draw. Cheap boards will support maybe 80 watts max; expensive boards can probably do a bit more than that with no problem, and even higher because of things like CF/SLI. In the 750 Ti link you provided, the average draw is 60-65 watts. A board can handle a spike a bit higher than 75 watts, but only spikes; a consistent draw over that is where the problem is.
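The spike-versus-sustained distinction above boils down to a moving average over the sampled draw. A minimal sketch (the sample traces below are made up for illustration; only the 75 W limit comes from the spec):

```python
# Distinguish brief spikes from sustained over-draw with a sliding-window average.
SLOT_LIMIT_W = 75.0

def sustained_over_limit(samples_w, window=10, limit=SLOT_LIMIT_W):
    """True if any `window`-sample average exceeds `limit` (sustained over-draw)."""
    for i in range(len(samples_w) - window + 1):
        if sum(samples_w[i:i + window]) / window > limit:
            return True
    return False

spiky = [65] * 20 + [95] * 2 + [65] * 20   # brief 95 W spike, average stays low
steady = [82] * 40                          # constant 82 W, clearly over spec

print(sustained_over_limit(spiky))   # False: spikes alone don't trip the average
print(sustained_over_limit(steady))  # True: sustained draw over 75 W does
```

This is why a chart's average line matters more than its peaks: the worst window in the spiky trace still averages only 71 W.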

June 30, 2016 | 08:48 PM - Posted by leszy (not verified)

The line is under the limit only because there are some drops down to 0.

July 1, 2016 | 02:22 AM - Posted by Anonymous (not verified)

Do you guys have a 750 around that you can test with your methodology? It is unclear to me what Tom's graph would look like filtered the same way yours is. Is it roughly equivalent to your raw data? Also, if other cards you have tested had similar issues, would you have noticed them? I don't know how long you have been using your current methodologies.

July 1, 2016 | 02:57 AM - Posted by Anonymous (not verified)

Or 960 I guess. It is hard to post here from an iPhone 5.

July 1, 2016 | 03:38 AM - Posted by Allyn Malventano

960 Strix appears closest to the limit from the NV side. We are going to look into that with more detail tomorrow (today).

June 30, 2016 | 07:44 PM - Posted by Anonymous (not verified)

https://www.reddit.com/r/Amd/comments/4qmlep/rx_480_powergate_problem_ha...

June 30, 2016 | 07:50 PM - Posted by MD5 (not verified)

No one will know if the GPU/motherboard is requesting more than 75W (using the "Slot Power Limit Value" field) over the PCIe bus until someone puts a PCIe bus analyzer on it and looks at the bytes communicated between them. Right now it appears that the motherboard and GPU are set to a 75W link even though the card is pulling more sustained power than that. The PCIe spec allows up to 300W through the slot, but no motherboard manufacturer has built a board that supports it; almost all boards only support 75W. Because of that, this link doesn't really relate to the topic in any meaningful way.

June 30, 2016 | 08:18 PM - Posted by Allyn Malventano

We've already had motherboard makers tell us that sustained 95W may damage their boards, so while there may be higher options in the Power_Limit message, the existing PCIe slot physical design is still limited to 75W max.

*edit* After researching this further: the 3.0 base spec is referring to 300W cards that use additional power connectors. The 3.0 version of the electromechanical spec retains the same 5.5A +12V / 75W total slot limits.
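For reference, here is where that 75 W total comes from, rail by rail (a sketch; the +3.3V figure is quoted from memory of the CEM spec, so treat it as approximate):

```python
# Per-rail current limits behind the PCIe slot's 75 W budget.
# +3.3V figure is approximate / from memory; +12V 5.5 A matches the edit above.
rails = {"+12V": (12.0, 5.5), "+3.3V": (3.3, 3.0)}  # name -> (volts, max amps)

for name, (volts, amps) in rails.items():
    print(f"{name}: {volts * amps:.1f} W")

total_w = sum(volts * amps for volts, amps in rails.values())
print(f"total: {total_w:.1f} W")  # ~75.9 W, which the spec caps at 75 W combined
```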

June 30, 2016 | 07:47 PM - Posted by leszy (not verified)

GTX 950 is consuming 141W from the PCI-E slot.

[IMG]http://media.bestofmicro.com/2/W/422600/original/01-GTX-750-Ti-Complete-Gaming-Loop-170-seconds.png[/IMG]

June 30, 2016 | 08:11 PM - Posted by leszy (not verified)

And nearly constant 95W. What about damaged PCI-E slots on the motherboards of GTX 950 users?

June 30, 2016 | 09:24 PM - Posted by Darkmoon (not verified)

The graphic you posted shows a 70 watt average load on the GTX 950, not 95 as you claim. Just look at the horizontal line and the description below it.

July 2, 2016 | 02:27 AM - Posted by arbiter

It's closer to 65 watts really, but as said many times before, SPIKES in power draw are fine and can be handled just fine. The problem is when that average line is pulled over the 75 watt spec.

June 30, 2016 | 07:48 PM - Posted by leszy (not verified)

http://media.bestofmicro.com/2/W/422600/original/01-GTX-750-Ti-Complete-...

July 1, 2016 | 12:31 AM - Posted by Allyn Malventano

For the millionth time, this pic shows sustained power draw *within* the limits of the slot.

July 4, 2016 | 12:12 AM - Posted by Anonymous (not verified)

You can't read data, can you? There is no problem here. Good gawd, you AMD fanboys are intolerable.

June 30, 2016 | 08:04 PM - Posted by Anonymous (not verified)

I'm kind of a noob. If I have a midrange mobo and don't plan on overclocking will this really be that big of a deal?

June 30, 2016 | 08:14 PM - Posted by leszy (not verified)

No. It's just a damage control campaign from the NV PR team.

June 30, 2016 | 08:23 PM - Posted by Allyn Malventano

'Damage control' that AMD has acknowledged as a valid issue and is working on a fix for? That's one hell of a campaign.

June 30, 2016 | 08:41 PM - Posted by leszy (not verified)

What about NV, with the same problem? Are they reacting?