
The Radeon RX 580 8GB Review - Polaris Populism

Author: Ryan Shrout
Manufacturer: AMD

Detailed Power Consumption Testing

When we started dissecting the power consumption concerns around the Radeon RX 480 (concerns AMD has since largely addressed with a driver fix and a new control panel option), I knew it meant a much more strenuous power testing process going forward.

How do we do it? It is simple in theory but surprisingly difficult in practice: we intercept the power being sent through the PCI Express bus and the ATX power connectors before it reaches the graphics card, and we measure the draw directly with a 10 kHz DAQ (data acquisition) device. A huge thanks goes to Allyn for getting the setup up and running. We built a PCI Express bridge that is tapped to measure both 12V and 3.3V power, and we built custom Corsair power cables that measure the 12V coming through those connectors as well.
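
To make the method concrete, here is a minimal sketch of the kind of post-processing such a capture implies, written in Python. The rail names, voltages, and currents below are illustrative assumptions, not our actual capture pipeline; the point is simply that each input is measured as voltage times current, and the rails are summed for total board power.

```python
# Illustrative sketch only: synthesizes fake rail data and shows the math
# behind the graphs (P = V * I per rail, summed for total board power).
import numpy as np

SAMPLE_RATE_HZ = 1000  # we log power readings at 1,000 samples per second

def rail_power(voltage_v: np.ndarray, current_a: np.ndarray) -> np.ndarray:
    """Instantaneous power on one rail, sample by sample."""
    return voltage_v * current_a

t = np.arange(0, 10, 1 / SAMPLE_RATE_HZ)  # 10 seconds of samples
# Hypothetical measured inputs: slot 12V, slot 3.3V, and one 8-pin connector.
slot_12v = rail_power(np.full_like(t, 12.1), 4.0 + 0.5 * np.random.rand(t.size))
slot_3v3 = rail_power(np.full_like(t, 3.3), 0.8 + 0.1 * np.random.rand(t.size))
aux_8pin = rail_power(np.full_like(t, 12.0), 9.0 + 1.0 * np.random.rand(t.size))

total = slot_12v + slot_3v3 + aux_8pin  # the "total" line in our graphs
print(f"mean board power: {total.mean():.1f} W")
```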

The result is data that looks like this.

[Graph: GTX 1080 power draw over time, broken out by input]

What you are looking at here is the power measured from the GTX 1080. From time 0 to about 8 seconds the system is idle; from 8 seconds to about 18 seconds Steam is starting up the title; from 18 to 26 seconds the game sits at the menus; we load the game from 26 to 39 seconds; and we play through our benchmark run after that.

There are four lines drawn in the graph. The 12V and 3.3V results are from the PCI Express bus interface, while the one labeled PCIE is from the PCIe power connection running from the power supply to the card. We have the ability to measure two power inputs there, but because the GTX 1080 uses only a single 8-pin connector, only one is shown here. Finally, the blue line is labeled total and is simply that: the sum of the other measurements, giving the combined power draw of the graphics card in question.

From this we can see a couple of interesting data points. First, the idle power of the GTX 1080 Founders Edition is only about 7.5 watts. Second, under a gaming load of Rise of the Tomb Raider, the card pulls about 165-170 watts on average, though there are plenty of intermittent spikes. Keep in mind we are sampling the power at 1,000 samples per second, so this kind of behavior is more or less expected.
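
For readers who want to reproduce this kind of summary from a raw trace, a hedged sketch of how we read the graph above follows; the window boundaries (idle before 8 seconds, benchmark after 39 seconds) come from this specific run and would differ per capture.

```python
import numpy as np

def summarize(power_w: np.ndarray, fs: int = 1000) -> dict:
    """Split a power trace into idle and gaming windows and summarize each.

    Window boundaries are assumptions taken from the run described above.
    """
    idle = power_w[: 8 * fs]      # first ~8 seconds: desktop idle
    gaming = power_w[39 * fs :]   # after ~39 seconds: the benchmark run
    return {
        "idle_mean_w": float(idle.mean()),
        "gaming_mean_w": float(gaming.mean()),
        "gaming_peak_w": float(gaming.max()),  # spikes sit well above the mean
    }
```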

Different games and applications impose different loads on the GPU and can cause it to draw drastically different amounts of power. Even if a game runs slowly, it may not be drawing maximum power if one subsystem on the GPU (memory, shaders, ROPs) is bottlenecking the others.

First, let’s look at our total power draw numbers.

[Graph: Total power draw comparison, Rise of the Tomb Raider]

There are several interesting things to glean from this data. First, we measured the power draw of the MSI RX 580 Gaming X card at 200 watts total. That is about 30 watts more than the Radeon RX 480 from ASUS, which we measured at 170 watts. For this particular game, Rise of the Tomb Raider, we saw a 1-4% improvement in performance (the 1% coming at our tested 2560x1440 resolution). That comes at a power penalty of 17%!

By contrast, the GTX 1060 6GB card from EVGA is 3-6% faster than the RX 580 yet draws only 115-120 watts! That gives the GTX 1060 6GB an 80 watt advantage in power consumption, a 66% difference! It would be a drastic understatement to say that this result merely shows Pascal as the superior GPU in terms of performance per watt.
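
The efficiency gap is easy to verify from the numbers above. Here is a quick worked check using 200 W for the RX 580, 120 W for the GTX 1060, and the midpoint of the 3-6% performance edge; the exact figures are approximations from this page, and the midpoint is an assumption for the sake of the arithmetic.

```python
# Back-of-the-envelope performance-per-watt comparison from the data above.
rx580_w, rx580_perf = 200, 1.000        # RX 580 as the performance baseline
gtx1060_w, gtx1060_perf = 120, 1.045    # ~3-6% faster; midpoint assumed

watt_gap_pct = (rx580_w - gtx1060_w) / gtx1060_w * 100
perf_per_watt = (gtx1060_perf / gtx1060_w) / (rx580_perf / rx580_w)
print(f"power gap: {watt_gap_pct:.0f}%")                   # ~67%
print(f"GTX 1060 perf/W advantage: {perf_per_watt:.2f}x")  # ~1.74x
```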

[Graph: Total power draw comparison, The Witcher 3]

In The Witcher 3, the results are very similar. Based on our measurements, the RX 580 pulls over 200 watts, the RX 480 uses 175 watts, and the GTX 1060 6GB uses 125 watts. If you care at all about the temperatures, noise, and power draw in your system, this is worth noting.

As for the potential for overdraw from any single source of power, how does the power distribution break down between the motherboard slot and the 8-pin/6-pin power connection on the RX 580 from MSI?

[Graph: RX 580 power draw by input, Metro: Last Light]

Metro: Last Light represents one of our worst-case scenarios and shows the MSI RX 580 Gaming X 8GB card using nearly 210 watts. Interestingly, only 40 watts of that comes through the PCIe slot on the motherboard, while the other 170 watts is pushed through the 8-pin power connector from the power supply. While I realize that is over the 150 watt specification for that connection, modern power supplies are overbuilt to such a degree that this result doesn't really bother us. As more overclocked RX 580 cards hit the market, though, it's worth keeping an eye on how particular power supplies handle the strain.
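
As a sketch of the check we are doing mentally here, the snippet below compares each input against its published budget (75 W for the PCIe slot, 150 W for an 8-pin connector); the measured values are the approximate Metro: Last Light numbers above.

```python
# Flag any power input that exceeds its published specification.
SPEC_LIMITS_W = {"pcie_slot": 75, "aux_8pin": 150}
measured_w = {"pcie_slot": 40, "aux_8pin": 170}  # approx. readings from this run

for source, watts in measured_w.items():
    limit = SPEC_LIMITS_W[source]
    verdict = "OK" if watts <= limit else f"OVER by {watts - limit} W"
    print(f"{source}: {watts} W of {limit} W spec -> {verdict}")
```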


April 18, 2017 | 09:11 AM - Posted by Dark_wizzie

Well I mean, at least 580 seems like a decent deal compared to 1060.

April 18, 2017 | 09:22 AM - Posted by Ryan Shrout

It's definitely not a BAD deal, but I was kind of hoping AMD would undercut the RX 480 pricing enough to put more pressure on the NVIDIA GTX 1060.

April 18, 2017 | 02:12 PM - Posted by TheTolsonator

Once again I didn't wait long enough to buy a new GPU. While the 1060 I have from EVGA is performing well enough, I could have saved a little money and gotten a card that runs a few percent better (albeit with higher power draw, but I digress).

Ahh the joys of PC building.

April 18, 2017 | 03:36 PM - Posted by Ryan Shrout

Very true, very true...

April 18, 2017 | 09:37 AM - Posted by zgradt

That's about what I expected from the 580. I was wondering if their new LPP or whatever would be a little more power efficient, but it seems like it just supports higher clock rates. My old PC has GTX 770s in SLI, which each pull up to 250W under load. Those are basically rebranded and overclocked GTX 680s. The more things change, the more they stay the same?

It's a shame there are no Vulkan benchmarks though.

April 18, 2017 | 02:24 PM - Posted by StephanS

About game selection: it's another case where you can pick the games you test so you can write the conclusion you want.

The worst offender is TechReport.

Contrast TechReport with Hardware Canucks.

One site used 3 games, most of them heavily favoring the GTX.
The other used 14 games, each at multiple resolutions...

TechReport lost all its credibility long ago :(

April 18, 2017 | 03:36 PM - Posted by Ryan Shrout

I'm not going to be overly critical of anyone, but testing takes time, especially if you do it correctly. For example, I still ONLY test GPUs with a hardware-based capture system, what we call Frame Rating and NVIDIA calls FCAT. It's more work and takes longer, but it gives us much more accurate results.

The only disappointing thing is when your testing finds no variation...then all the work proves that everything works fine. :)

April 18, 2017 | 05:34 PM - Posted by zgradt

Measuring frame times does find problems in individual games better, but I still find myself hunting down the average framerate as an easy way to compare hardware.

I only pay close attention to the frame times if I happen to be interested in actually playing the game being benchmarked. Funnily enough, I don't. These days, it's mostly Overwatch, but everyone seems to think that its hardware requirements are too low to make a good benchmark. I do wonder why Rise of the Tomb Raider always has such low framerates, even though it's a console game port. But then again, I don't play that game, so the numbers don't mean much to me. As long as half the games do a few % better, and the other half only do a few % worse, I figure they're about equal. Still, if Vulkan ever catches on, the Radeon cards do put up some really good numbers in Doom. I wonder if that advantage will also carry over into Quake Champions.

I would like to know their relative strengths in compute though. It isn't a good measure of gaming prowess, but if I ever get the urge to fold some proteins, I'd like to know if I should even bother. Judging by the TFLOPS, the Radeon cards should be pretty decent.

April 19, 2017 | 11:40 AM - Posted by Dusty

I still applaud the effort you and your team put in for Frame Rating. Concept, debugging, and getting it to work on multiple platforms must have been a huge task. Now you've moved on to actually measuring power draw! What other sites are doing that? Not many. That extra effort will always earn my page views and clicks.

April 26, 2017 | 05:22 PM - Posted by elites2012

i have to agree with you on this.

April 18, 2017 | 09:43 AM - Posted by Xukanik

Well it looks like I might need to go team green next build. I was hoping for power consumption optimization, but it doesn't look like that happened.

Hard choice being an AMD fan (Not a crazy fanboy)

Ryzen 1600 + Nvidia 1060 + 16GB Ram.

Humm that's 666 I guess it could be my Evil computer.

Unless Vega has some magical TDP for a cheap price!

April 18, 2017 | 02:28 PM - Posted by StephanS

Check this review. Also, I'm not sure the GTX 1060 will age that well.

And unless you are bitmining 24/7, at idle/normal load it seems both cards are equivalent.
By that I mean the extra 20 watts or so during gaming should be "invisible" in your power bill.

http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/75127-amd-...

April 18, 2017 | 09:54 AM - Posted by Lucidor

I was hoping for a more power efficient card at a more considerable discount, like 20W less power draw and $200 for the 8GB and $160 for the 4GB.
This is disappointing even for a rebrand.

April 18, 2017 | 12:46 PM - Posted by Jtaylor1986

I don't understand why you care about 185W of power draw. You will save like $2.00 in electricity over the life of the card by saving 20W. As long as it has adequate cooling at an acceptable noise level, power consumption isn't that important.
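
A number like that is easy to sanity-check. The sketch below assumes 20 W saved, two hours of gaming a day for a year, and a typical $0.12/kWh rate; all of those are assumptions for illustration, not figures from the review.

```python
# Rough electricity-cost check under assumed usage (not data from the review).
watts_saved = 20       # assumed power difference between cards while gaming
hours_per_day = 2      # assumed daily gaming time
days = 365             # one year of use
rate_per_kwh = 0.12    # assumed electricity rate in $/kWh

kwh_saved = watts_saved * hours_per_day * days / 1000       # ~14.6 kWh
print(f"~${kwh_saved * rate_per_kwh:.2f} saved per year")   # ~$1.75
```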

April 18, 2017 | 01:53 PM - Posted by Xukanik

For me the power draw matters because I want a quiet system.

Power = More Heat = Louder system.

April 18, 2017 | 04:01 PM - Posted by Jtaylor1986

This is completely false. As long as the cooling capacity of the card is equal to or greater than the TDP of the part, a quiet system can be achieved. Yes, there are practical limits to this; you aren't going to put a 20 pound heatsink on a card or fans the size of a dinner plate. But there is no technical reason why a part with a TDP of <200W can't be quiet.

April 18, 2017 | 05:02 PM - Posted by Lucidor

Anything can be quiet, but given the same cooler designed to dissipate ~200W, a card with an 80W lower TDP is going to be a whole lot quieter, especially if the PC is placed somewhere with limited airflow, as is the case with mine.

April 18, 2017 | 05:19 PM - Posted by Kamgusta

Are you serious?
More watts means more heat in the room. While that is nice in winter, it is not in summer.
Also, given the fixed size of video cards (they can't be as tall as some CPU heatsinks), higher power consumption roughly translates into higher fan RPMs and therefore more noise.
NVIDIA GTX 1060s consume so little power they often stop spinning their fans, even while playing games.
The RX 480/RX 580 do not.

April 19, 2017 | 05:38 AM - Posted by Kamgusta

Well, I have to eat my hat for that.

April 19, 2017 | 12:06 PM - Posted by Jtaylor1986

Blower style cards are inherently noisier than open air designs.

April 19, 2017 | 04:22 AM - Posted by Jann5s

Power = More Heat = Louder system.

This, I completely agree with. It is especially true for people who like to game in the living room and thus use tiny systems, because they don't want their living room looking like a student room. Because of this I've gone the NVIDIA way since Hawaii.

Now my eye is back on AMD, because I really hope some TV manufacturer will make a FreeSync compatible TV soon. My guess is that this will happen before there is a G-Sync TV, mainly because FreeSync will be available in the Neo and Scorpio.

April 19, 2017 | 12:16 PM - Posted by Jtaylor1986

This usage case (small form factor cases) is pretty much the only reasonable argument in my opinion for power consumption being a primary concern when buying a card (within reason).

April 18, 2017 | 10:08 AM - Posted by mouf

I will be the first to admit that I am an NVIDIA user, but obviously I still want AMD to succeed and bring some competition to the table to push the market ahead. This is another disappointing product launch, though.

I pretty much mirror Lucidor's comments.

April 18, 2017 | 10:15 AM - Posted by Toettoetdaan

It's surprising how game selection can influence the conclusion of a review. In other reviews the RX 580 has an 8% lead over the 1060, and here it is the other way around: an 8% lead for the 1060.

It might be interesting to do a big write-up of all the game engines and their strong and weak points on different GPUs and CPUs.

Anyway, I like your review and test methodology; it is an example of how it should be done!

April 18, 2017 | 11:06 AM - Posted by Ryan Shrout

Thank you sir! It's always interesting to see how things vary.

In which other reviews do you see the opposite results? Curious.

April 18, 2017 | 04:38 PM - Posted by Jtaylor1986

I've read many reviews today and it's very hard to compare across sites, as your testing, graphs, and data are flat out different (better) than all but maybe one other site's, not to mention the settings and resolution differences. Most sites are still stuck in the world of average and minimum fps, unfortunately.

April 18, 2017 | 04:52 PM - Posted by Ryan Shrout

It's tough. There are some metrics and reporting that other outlets use that I would like to integrate, but I just haven't had time to do it.

April 18, 2017 | 10:36 AM - Posted by CB

It'll be very interesting to see how the 580 overclocks with the additional available power.

A 10% overclock could put it even with the 1060.

April 18, 2017 | 11:08 AM - Posted by Ryan Shrout

Agreed. I have some results I didn't have time to put in - more likely 3-4% overclocking headroom with our sample.

April 18, 2017 | 11:18 AM - Posted by Stefem

I don't think it will overclock much; power consumption could go out of control.
The RX 580 already consumes around 100W more, almost double the power of the similarly performing GTX 1060.

April 18, 2017 | 11:20 AM - Posted by Stefem

3-4%! That sample was quite at its limit.

April 18, 2017 | 11:30 AM - Posted by Activate_AMD

It seems unlikely, based on the TDP increase and the relatively small clock increase, that it will overclock anywhere close to 10%. The 480 didn't have a ton of headroom, and this is basically just a factory OC that's probably using up most of the additional headroom vs the 480.

April 18, 2017 | 10:48 AM - Posted by Mobile_Dom

Personally, I would love to see a 550 review. Brand new (to an extent) silicon is always interesting, and it could be interesting for a cheaper workstation to go from an older dGPU or an iGPU to it and see the delta.

just my thoughts

April 18, 2017 | 11:08 AM - Posted by Ryan Shrout

Yup! Those launch a bit later. Hopefully I'll get hands-on with one!

April 18, 2017 | 10:48 AM - Posted by Activate_AMD

So nothing new; it is basically just a factory overclock masquerading under a new model number... AMD even calls it "Polaris 20" according to PcPer's chart. Seems a bit overzealous for an 80 MHz OC.

I'm not even sure what AMD is trying to do with this refresh. The RX 560 and 550 are new, so I guess AMD took the opportunity to do a line-wide rebrand to help sell a bunch of low end chips? Is that worth the cynicism that's inevitably generated by the irrelevant upgrades to the 480/470? I feel like this launch says something about where Vega is right now, because if it were me, I would have held off on the Polaris rebrand until Vega dropped in order to leverage Vega hype across the brand. How long will AMD go without a high end card? NVIDIA has been uncontested above $250 for a year now.

April 18, 2017 | 12:52 PM - Posted by Jtaylor1986

I think the answer to your question is that they were somewhat backed into a corner and had to rebrand, as it's clear they don't have the resources to launch a full top-to-bottom lineup of chips in a reasonable time frame anymore. If they didn't create the 500 series brand, they would have had 3 new chips (Vega 10, Vega 11, and Polaris 12) in the same product naming family as 2 older chips (Polaris 10 and 11). Basically the best of bad options, with the RX 580 being pretty shameless while the RX 570 and RX 560 at least offer a bit more compelling value this time around.

April 18, 2017 | 03:59 PM - Posted by terminal addict

It's surprising that so many are critical of AMD's rebranding of Polaris. Intel launched a "7th gen" CPU that was essentially a slightly overclocked 6th gen CPU. There were some people calling Intel out on the rebrand, but nowhere near the degree people are reaching with the RX 500 series.

I'm not saying I approve, but this just seems to be the norm now. Not sure why AMD has to be the company singled out the most.

April 18, 2017 | 09:10 PM - Posted by Activate_AMD

No argument from me that the difference between Skylake and Kaby Lake is so minimal that it is effectively a rebrand. The difference is that stock performance increased with zero power increase. That tells you two things may be going on: they're getting better bins, or they actually made a process change. The RX 580 gives you a small clock speed bump at a tangibly increased TDP... i.e., they just overclocked the card and called it a day.

Consider their relative positions as well. Intel is in a position of strength, and has been for a long time. Criticize them for not pushing the envelope, but unfortunately that's what you get when R&D is expensive and competition is nonexistent. AMD is the clear underdog and has been for years now (speaking about GPUs). They need a win, or at least something to claw back mind share, and this is what they give us? After a year of Pascal crushing the high end? They should have been primed to drop a big new card with the node change, because everyone knew there would be a huge wave of people upgrading old 28nm cards. OK, so they led with Polaris because Vega on 14nm was a bit too risky, but it's been a year now. How many sales have been lost to people who waited 6 months and gave up?

Having this crappy rebrand come before Vega feels like a blunder, because it just highlights how slow they've been to bring out something on the high end. They should have launched the 560/550, gotten us ready for Vega launched under a 5xx moniker (like NVIDIA did with the 750 Ti), and then brought the 580/570 rebrands out at the same time. Makes me wonder if Vega is still so far out that that strategy would not be viable.

April 18, 2017 | 11:48 AM - Posted by Kamgusta

Well, I really see no point in buying an RX 480/RX 580 when the GTX 1060 6GB has a 5% performance advantage, consumes a lot less power, has a more sophisticated suite of drivers, and costs roughly the same. And if you manage to find one of the newer 9 Gbps GTX 1060s, that's the jackpot.

April 18, 2017 | 12:59 PM - Posted by Jtaylor1986

It depends on which games you want to play. I'd read reviews that test the games you want to play, as this is a fairly small sample size and doesn't include some very popular titles like BF1, the Total War series, Civilization 6, Deus Ex: Mankind Divided, etc.

April 18, 2017 | 02:04 PM - Posted by Ryan Shrout

Obviously every review will have a unique sample of games to test, but I would counter that the Total War games, and possibly Civ 6, don't count as "very popular titles."

Again, to each their own of course. The more data out there, the better.

April 18, 2017 | 04:08 PM - Posted by Polycrastinator

So I understand they may be less popular, but it would be nice to see at least one strategy game in there. You hit a lot of other marks; something with a lot of small units, such as a late-game Civ benchmark, would be an interesting addition (I'd say Ashes, but I worry a bit that they put their thumb on the scale for AMD).
For that matter, some sort of compute/creative benchmark would be interesting, too.

April 18, 2017 | 04:25 PM - Posted by Jtaylor1986

http://store.steampowered.com/stats/

As I write this, Dirt Rally, Rise of the Tomb Raider, and Hitman are not even in the top 100 titles currently being played on Steam. There are 4 Total War titles and 2 Civilization games on the list, though, so I'm not sure what data you are looking at. Also, there isn't a single game on the list from either EA or Ubisoft. I'm not saying that including more titles would change the overall conclusions, because I have no idea, but forming them from a sample of 6 is IMO a bit too small.

April 18, 2017 | 04:52 PM - Posted by Ryan Shrout

Point taken.

However, with Origin and Uplay out there, the metrics aren't covering all bases.

April 18, 2017 | 03:57 PM - Posted by Kamgusta

To the commenter above: I did.

TechPowerUp tested 22 games while AnandTech tested only 9. Aside from 1 or maybe 2 titles, we always see the RX 480/RX 580 behind the GTX 1060 6GB; in GTA V, by a large margin. So nothing has changed since the launch of the RX 480, and I can make an educated guess that nothing will change in the future, either.

But I don't really care about the exact size of the performance deficit (does it really matter if it is 4%, 7%, or 9%? not to me), so I roughly translated it into a 5% deficit. I rounded it in AMD's favor to keep AMD fans from crying, and to make things easier.

What matters is that the RX 480/RX 580 don't deliver the same FPS as the GTX 1060 6GB. And as they cost the same amount of money (again, a $5 or $10 difference doesn't really matter) while consuming a lot more power, I really see no point in choosing them over a shiny, brand new GTX 1060 6GB.

April 18, 2017 | 05:46 PM - Posted by Hood

Haven't seen any 9 Gbps 1060s, but $750 will get you this 8GB 1060 - https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=...

April 18, 2017 | 06:27 PM - Posted by Kamgusta

Newegg got the product details wrong: it is a GTX1080.

April 18, 2017 | 01:07 PM - Posted by Jtaylor1986

I know this is a lot of work, Ryan, but is there any chance we can add more titles to the testing in the future? I think you made good choices of very popular titles, but it's hard to say if this is a representative sample of the performance gamers will find off the beaten path, so to speak. Also, AFAIK Dirt Rally is the only title in your suite that didn't have heavy AMD/NVIDIA involvement in its development.

April 18, 2017 | 02:06 PM - Posted by Ryan Shrout

I honestly couldn't care less about who was "involved" in the games. To some degree it's good to know for anecdotal information, but we look at games that are popular, fit a niche (getting some first person, third person, racing, etc. games in there), and push the GPU a bit more than others.

But to your point, Hitman was definitely an AMD-centric title, as was Dirt Rally.

April 18, 2017 | 03:14 PM - Posted by Jann5s

I was really hoping this iteration of 14nm could squeeze out a few more MHz without raising the TDP. I thought at the time that the RX 480 could have been hindered by process challenges that could be resolved in this v2 of the chip. Sadly, it seems Polaris 10 (20) is at its limits.

P.S. just noticed, no more anonymous posts? (I like it)

April 18, 2017 | 04:53 PM - Posted by Ryan Shrout

It's still possible the process is partly responsible, but it just hasn't been refined enough for us to see any differences. 

April 18, 2017 | 05:10 PM - Posted by pdjblum

i appreciate the re-branding, even if it is only a small deal, as it helps me to know what I am buying

rather than guessing whether this is a revised 480 or the original when I go to buy, i like being able to know which revision i am buying, so when i get a 580 i know what i am getting

so much more civil and pleasant without the anonymous posts

thanks so much

April 18, 2017 | 09:48 PM - Posted by Sebastian Peak

You make a good point as to product differentiation. While products based on the same core can be dismissed as 're-brands', it is also fair to say that providing clear differentiation to the end user is valid. I think the question becomes whether the introduction of a new series (5xx from 4xx) is required for faster SKUs, or whether a variant product name (RX 480X or RX 485, for example) would be enough.

April 19, 2017 | 01:58 AM - Posted by pdjblum

sebastian, even V2 would have been sufficient

i recall mobos used to have version numbers, which was essential to know

but i don't fault them at all as i believe lisa and raja are about the most humble and honest and brilliant people amd has had running the place for quite some time, and i don't get that they are being deliberately deceptive with the naming scheme

it's just fuckin numbers anyway, but i guess the community takes it all very seriously

lg and samsung release a new tv each year, usually with only marginal changes from the year before, but they refer to the models by year, as you know, and I certainly appreciate knowing that

it is reasonable for the change to 500 to distance it from all the bad publicity the 480 got with the power distribution issue, don't you think?

i just appreciate that companies like amd and intel and nvidia and the rest make such cool shit for me to enjoy

i am so fucking lucky as far as that goes, so they can call the shit whatever they want as long as i know what i am getting

i recently decided to try an induction cooktop, single burner, because i hate using the fancy electric range in my very high end apt in austin, and once again, all i can feel is how crazy cool it is that I can own such awesome tech, and for only $125 or so

that goes for all my stereo equipment and my japanese ceramic knife, which is my favorite tech of all

so with all this amazing goodness which i don't deserve, how can i fault these folks for some numbers

April 18, 2017 | 10:35 PM - Posted by Pingjockey

The big question at this point is whether this card is a valid upgrade path. For example, I currently have an R9 Fury from Sapphire, which I got on sale at a great price point. Assuming I wish to stay with AMD, is the 580 an option for an upgrade? From the power usage side of things one would say yes; it's harder to say on the performance side.

Again, great article and truly enjoy the site!

April 19, 2017 | 09:08 AM - Posted by Activate_AMD

If you didn't think the 480 was a worthy upgrade, then the 580 isn't either. If you want a GPU that occupies the same place in the product stack that the Fury occupied when it was initially released, you can't get one from AMD right now, and you should wait for Vega if you feel compelled to stay with AMD.

April 19, 2017 | 10:56 AM - Posted by Ryan Shrout

Yeah, in your case, if you aren't considering NVIDIA, you should wait.

April 19, 2017 | 01:17 AM - Posted by Abyssal Radon

After looking into many reviews of this Polaris refresh, I've got to say I'm extremely disappointed with AMD for this one. I fully understand the market they are chasing with this refresh/rehash of GPUs, but I scratch my head at rebrands 99% of the time; I find these cards somewhat misleading. Take an RX 480 and compare it to an RX 580: they are pretty much the same thing with a slightly higher base clock on the core. The slightly lower price point is a nice upgrade for someone still using a ~380X-class GPU (AMD or NVIDIA). Perhaps I'm disappointed because these GPUs wouldn't be a worthy upgrade for me... I suppose the point I'm trying to make is: please do not release a rebrand (literally), and focus on the good stuff, aka Vega.

Awesome review as always; I have been a long time reader. So GG, Mr. Shrout.

April 19, 2017 | 10:57 AM - Posted by Ryan Shrout

Thanks!

I am 100% in agreement: had this refresh come with a $30-40 price cut, they would sell a lot more of them and possibly cut into the market share of NVIDIA's 1060 product line.

April 19, 2017 | 12:27 PM - Posted by Jtaylor1986

I would wager a guess that with a $30-$40 price cut they would be selling these cards at basically break even. At <20% market share that is something you might have to do over the short term to stay relevant or die. At ~30% market share sustainable profitability becomes the driving factor.

April 19, 2017 | 11:02 PM - Posted by Streetguru

No overclocking? Trying to get a feel for whether 1500 MHz will be close to the average OC.

Radeon Chill can drastically help with power usage, but I get that many people probably don't care, or don't play a game it supports.

As far as the 580 vs 1060 goes, I'd probably still stick with the 580 for the bit of extra VRAM, but mostly for the FreeSync cash savings.

Although, don't a majority of the games you tested here lean towards NVIDIA cards in general?

Following that, I'm sad not to see DOOM under Vulkan; a 480/580 gets near a 1070 in that game. DX12/Vulkan needs to get here sooner if games can all be that well optimized.

April 20, 2017 | 12:33 AM - Posted by James

While it is great that you have the gear to measure the power draw of the card precisely, I would still like at-the-wall measurements. NVIDIA's driver puts a much heavier load on the CPU, so you aren't getting quite the full story by measuring only the power consumption of the card. I don't care too much about load power anyway, but it isn't that hard to get some at-the-wall measurements. I care a little bit more about idle power, but electricity is cheap. I replaced three 75 watt incandescent bulbs in my kitchen with LED bulbs at 15 watts each, so I saved 180 watts on lights that are on for hours every day, and I didn't notice any meaningful difference in my electricity bill.

April 20, 2017 | 08:05 AM - Posted by analogue

Apparently the MSI card comes "over-volted".
It can actually run at 1393 MHz at lower voltages, pulling less power!
https://youtu.be/MQ9ro5pwfXY

Also, it would be nice to test the benefits of Chill.
Let's assume that a gamer with a 75Hz FreeSync monitor doesn't gain anything from a card pushing 100 fps rather than 80 fps. Assuming the play experience would be the same with Chill locked at 75 fps, how much would Chill reduce the power consumption?
