
The Radeon RX 580 8GB Review - Polaris Populism

Author: Ryan Shrout
Manufacturer: AMD

Testing Suite and Methodology Update

If you have followed our graphics testing at PC Perspective, you'll know about a drastic shift we made in 2012 to support a technology we called Frame Rating. Frame Rating uses direct capture of the output from the tested system into uncompressed video files, along with FCAT-style scripts that analyze the video to produce statistics including frame rates, frame times, frame time variance, and game smoothness.

Readers and listeners might have also heard about the issues surrounding the move to DirectX 12 and UWP (Universal Windows Platform) and how it affected our testing methods. Our benchmarking process depends on a secondary application running in the background on the tested PC that draws colored overlays along the left-hand side of the screen in a repeating pattern to help us measure performance after the fact. The overlay we had been using supported DirectX 9, 10, and 11, but didn't work with DX12 or UWP games.

We worked with NVIDIA to fix that, and we now have an overlay that behaves exactly as before but lets us properly measure performance and smoothness in DX12 and UWP games. This is a big step toward maintaining the detailed analytics of game performance that enable us to push both game developers and hardware vendors to perfect their products and create the best possible gaming experiences for consumers.
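To make the overlay idea concrete, below is a minimal sketch in Python (using OpenCV) of the kind of analysis a capture-based pipeline performs; it is illustrative only, not our production FCAT scripts. It samples the overlay along the left edge of each captured video frame and treats each color change as a newly presented game frame. The file name, sample position, and color tolerance are assumptions for the example.

    # Illustrative sketch of FCAT-style overlay extraction, not the
    # production scripts. Assumes an uncompressed capture in capture.avi
    # with the colored overlay drawn in the leftmost pixel column.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("capture.avi")
    capture_fps = cap.get(cv2.CAP_PROP_FPS)   # e.g. 60.0 for a 60 Hz capture
    transition_times = []                     # capture time of each new color band
    prev_color = None
    frame_index = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        color = frame[0, 0].astype(int)       # top-left pixel of the overlay column (BGR)
        # A change in overlay color means the game presented a new frame.
        # (A real pass scans the whole column, since several bands, and
        # runts, can land inside a single captured frame.)
        if prev_color is None or np.abs(color - prev_color).max() > 16:
            transition_times.append(frame_index / capture_fps)
            prev_color = color
        frame_index += 1
    cap.release()

    # Display-side frame times (ms) between successive overlay transitions.
    frame_times_ms = 1000.0 * np.diff(transition_times)
    print(f"{len(frame_times_ms)} frames, average {frame_times_ms.mean():.2f} ms")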

As a result, our testing suite has been upgraded with a new collection of games and tests. Included in this review are the following:

  • 3DMark Fire Strike Extreme and Ultra
  • Unigine Heaven 4.0
  • Dirt Rally (DX11)
  • Fallout 4 (DX11)
  • Gears of War Ultimate Edition (DX12/UWP)
  • Grand Theft Auto V (DX11)
  • Hitman (DX12)
  • Rise of the Tomb Raider (DX12)
  • The Witcher 3 (DX11)

We have included racing games, third-person and first-person titles, DX11, DX12, UWP, and some synthetics, going for a mix that I think encapsulates the gaming market of today and the near future as well as possible. Hopefully we can finally end the bickering in comments about not using DX12 titles in our GPU reviews! (Ha, right.)

Our GPU testbed remains unchanged, including an 8-core Haswell-E processor and plenty of memory and storage.

  PC Perspective GPU Testbed
Processor: Intel Core i7-5960X Haswell-E
Motherboard: ASUS Rampage V Extreme X99
Memory: G.Skill Ripjaws 16GB DDR4-3200
Storage: OCZ Agility 4 256GB (OS), Adata SP610 500GB (games)
Power Supply: Corsair AX1500i 1500 watt
OS: Windows 10 x64
Drivers: AMD 17.10 (Press), NVIDIA 381.65


For those of you who have never read about our Frame Rating capture-based performance analysis system, the following section is for you. If you have, feel free to jump straight into the benchmark action!

 

Frame Rating: Our Testing Process

If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online. Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly, then post-process the resulting video to determine frame rates, frame times, frame variance, and much more.

This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options. So please, read up on the full discussion about our Frame Rating methods before moving forward!

While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own.

The PCPER FRAPS File

Previous example data

While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers. The first file shows a subset of the data from the RUN file (described below): the average frame rate over time as defined by FRAPS, though we combine all of the GPUs we are comparing into a single graph. This basically emulates the data we have been showing you for the past several years.
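As a rough illustration of what that graph plots, here is a short Python sketch that buckets frame presentation timestamps into one-second windows to produce a FRAPS-style FPS-over-time series; the function name and the per-card overlay at the end are assumptions for the example.

    # Sketch: FRAPS-style average FPS over time. Count the frames
    # presented within each one-second window of the benchmark run.
    import numpy as np

    def fps_over_time(timestamps):
        """timestamps: sorted frame presentation times in seconds."""
        ts = np.asarray(timestamps, dtype=float)
        seconds = np.arange(int(ts[0]), int(ts[-1]) + 1)
        fps = np.array([np.count_nonzero((ts >= t) & (ts < t + 1)) for t in seconds])
        return seconds, fps

    # Combining the GPUs into a single graph just means plotting one
    # such series per card:
    #   for name, ts in {"RX 580": ts_580, "GTX 1060": ts_1060}.items():
    #       plt.plot(*fps_over_time(ts), label=name)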

 

The PCPER Observed FPS File

Previous example data

This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown previously as the blue bars in the RUN file. This takes out the dropped and runt frames, giving you the performance metric that actually matters: how many frames are being shown to the gamer to improve the animation sequences.

As you’ll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS graph indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC.

 

The PLOT File

Previous example data

The primary file that is generated from the extracted data is a plot of calculated frame times, including runts. The numbers here represent the amount of time each frame appears on the screen for the user; a “thinner” line across the time span represents frame times that are consistent and thus should produce the smoothest animation for the gamer. A “wider” line, or one with a lot of peaks and valleys, indicates a lot more variance, likely caused by a lot of runts being displayed.

 

The RUN File

While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card on that particular test. It is in this graph that you can see interesting data about runts, drops, average frame rate, and the actual frame rate of your gaming experience.

Previous example data

For tests that show no runts or drops, the data is pretty clean. This is the frames-per-second-over-time view that has become the standard for performance evaluation on graphics cards.

Previous example data

A test that does have runts and drops will look much different. The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation – as the FRAPS FPS measurement does. Any area in red is a dropped frame – the wider the band of red you see, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.

The wide yellow areas represent runts: the thin bands of color in our captured video that we have determined do not add to the animation of the image on the screen. The larger the area of yellow, the more often those runts are appearing.

Finally, the blue line is the measured FPS over each second after removing the runts and drops. We call this metric the “observed frame rate,” as it measures the actual speed of the animation the gamer experiences.
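Here is a minimal sketch of how those two headline numbers can diverge, assuming the per-frame overlay band heights have already been extracted from the capture; the 21-scanline runt cutoff is FCAT's conventional threshold, and the function name and dropped-frame count parameter are ours for illustration.

    # Sketch: traditional FPS vs. "observed" FPS from overlay band heights.
    import numpy as np

    RUNT_SCANLINES = 21  # conventional FCAT runt cutoff; an assumption here

    def traditional_vs_observed(band_heights, dropped_frames, run_seconds):
        """band_heights: scanlines each captured frame's color band occupied.
        dropped_frames: count of overlay colors that never reached the display."""
        heights = np.asarray(band_heights)
        # Traditional counting credits every rendered frame, runts and drops included.
        traditional_fps = (len(heights) + dropped_frames) / run_seconds
        # Observed FPS keeps only frames large enough to contribute to animation.
        shown = np.count_nonzero(heights >= RUNT_SCANLINES)
        return traditional_fps, shown / run_seconds

    # A large gap between the two values flags runt-heavy results that
    # inflate the traditional FPS number without improving animation.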

 

The PERcentile File

Previous example data

Scott introduced the idea of frame time percentiles months ago, but now that we have different data using direct capture as opposed to FRAPS, the results might be even more telling. In this case, FCAT is showing percentiles not by frame time but instead by instantaneous FPS. This tells you the minimum frame rate that will appear on the screen for any given percentage of time during our benchmark run. The 50th percentile should be very close to the average frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected.

The closer this line is to being perfectly flat, the better, as that would mean we are running at a constant frame rate the entire time. A steep decline on the right-hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
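For reference, here is a small sketch of how such a curve can be computed from measured frame times; the function name is illustrative.

    # Sketch: instantaneous-FPS percentile curve, as in the PER file.
    import numpy as np

    def fps_percentile_curve(frame_times_ms):
        """Sort each frame's instantaneous FPS from best to worst; the value
        at percentile p is the minimum frame rate sustained for p% of the run."""
        inst_fps = np.sort(1000.0 / np.asarray(frame_times_ms, dtype=float))[::-1]
        percentiles = np.linspace(0.0, 100.0, num=len(inst_fps))
        return percentiles, inst_fps

    # A flat curve means a constant frame rate; a steep drop toward 100%
    # means the slowest frames are far slower than the typical frame.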

 

The PCPER Frame Time Variance File

Of all the data we are presenting, this is probably the one that needs the most discussion.  In an attempt to create a new metric for gaming and graphics performance, I wanted to try to find a way to define stutter based on the data sets we had collected.  As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, t_display, or on both levels.  Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?

We define a single frame variance as the difference between the current frame time and the previous frame time – how consistently consecutive frames are presented to the gamer. However, as I found in my testing, plotting the value of this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT. To be more specific, stutter is only perceived when there is a break from the previous animation frame rates.

Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames. Then, by sorting these variances and plotting them in percentile form, we can get an interesting look at potential stutter. Comparing the frame times to a running average, rather than just to the previous frame, should prevent false positives from legitimate performance peaks or valleys found when moving from a highly compute-intensive scene to a lighter one.
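A minimal sketch of that metric as just described, assuming a list of measured frame times in milliseconds; the 20-frame window comes from the text above, while the function name is ours.

    # Sketch: frame time variance against a 20-frame running average,
    # sorted into the percentile curve we plot as "ISU".
    import numpy as np

    WINDOW = 20  # compare each frame to the average of the previous 20

    def variance_percentile_curve(frame_times_ms):
        ft = np.asarray(frame_times_ms, dtype=float)
        variance = np.empty(len(ft) - WINDOW)
        for i in range(WINDOW, len(ft)):
            running_avg = ft[i - WINDOW:i].mean()
            variance[i - WINDOW] = ft[i] - running_avg  # positive = late vs. recent pace
        sorted_var = np.sort(variance)
        percentiles = np.linspace(0.0, 100.0, num=len(sorted_var))
        return percentiles, sorted_var

    # A curve hugging 0 ms indicates smooth pacing; a sharp rise at the
    # high percentiles points to microstutter and hitching.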

Previous example data

While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our gameplay testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions. So much so, in fact, that I am going to call this data the PCPER ISU, which beer fans will appreciate as the acronym of International Stutter Units.

To compare these results, you want to see a line that stays as close to the 0ms mark as possible, indicating very little frame time variance when compared to a running average of previous frames. There will be some inevitable incline as we reach the 90+ percentile, but that is expected with any gameplay sequence that varies from scene to scene. What we do not want to see is a sharper upward line, which would indicate higher frame variance (ISU) and could be an indication that the game suffers from microstuttering and hitching problems.


April 18, 2017 | 09:11 AM - Posted by Dark_wizzie

Well I mean, at least 580 seems like a decent deal compared to 1060.

April 18, 2017 | 09:22 AM - Posted by Ryan Shrout

It's definitely not a BAD deal, but I was kind of hoping AMD would undercut the RX 480 pricing enough to put more pressure on the NVIDIA GTX 1060.

April 18, 2017 | 02:12 PM - Posted by TheTolsonator

Once again I didn't wait long enough to buy a new GPU. While the 1060 I have from EVGA is performing well enough, I could have saved a little money and gotten a card that runs a few percent better (albeit with higher power draw, but I digress).

Ahh the joys of PC building.

April 18, 2017 | 03:36 PM - Posted by Ryan Shrout

Very true, very true...


April 18, 2017 | 09:37 AM - Posted by zgradt

That's about what I expected from the 580. I was wondering if their new LPP or whatever would be a little more power efficient, but it seems like it just supports higher clock rates. My old PC has SLI GTX 770s, which each pull up to 250W under load. Those are basically rebranded and overclocked GTX 680s. The more things change, the more they stay the same?

It's a shame there are no Vulkan benchmarks though.

April 18, 2017 | 02:24 PM - Posted by StephanS

About game selection: it's another case where you can pick the games you test so you can write the conclusion you want.

The worst offender is TechReport.

Contrast Techreport with hardwarecanucks.

One site used 3 games, most heavily favoring the GTX.
The other used 14 games, each at multiple resolutions...

Techreport lost all its credibility long ago :(

April 18, 2017 | 03:36 PM - Posted by Ryan Shrout

I'm not going to be overly critical of anyone, but testing takes time, especially if you do it correctly. For example, I still ONLY test GPUs with our hardware-based capture system, called Frame Rating, which is what NVIDIA calls FCAT. It's more work and takes longer, but gives us much more accurate results.

The only disappointing thing is when your testing finds no variation...then all the work proves that everything works fine. :)

April 18, 2017 | 05:34 PM - Posted by zgradt

Counting frame times does find problems with individual games better, but I still find myself hunting down the average framerate as an easy way to compare hardware.

I only pay close attention to the frame times if I happen to be interested in actually playing the game being benchmarked. Funnily enough, I don't. These days, it's mostly Overwatch, but everyone seems to think that its hardware requirements are too low to make a good benchmark. I do wonder why Rise of the Tomb Raider always has such low framerates, even though it's a console game port. But then again, I don't play that game, so the numbers don't mean much to me. As long as half the games do a few % better, and the other half only do a few % worse, I figure they're about equal. Still, if Vulkan ever catches on, the Radeon cards do put up some really good numbers in Doom. I wonder if that advantage will also carry over into Quake Champions.

I would like to know their relative strengths in compute though. It isn't a good measure of gaming prowess, but if I ever get the urge to fold some proteins, I'd like to know if I should even bother. Judging by the TFLOPS, the Radeon cards should be pretty decent.

April 19, 2017 | 11:40 AM - Posted by Dusty

I still applaud the effort you and your team put in for Frame Rating. Concept, debugging, and getting it to work on multiple platforms must have been a huge task. Now you've moved on to actually measuring power draw! What other sites are doing that? Not many. That extra effort will always bring my page views and clicks.

April 26, 2017 | 05:22 PM - Posted by elites2012

i have to agree with you on this.

April 18, 2017 | 09:43 AM - Posted by Xukanik

Well it looks like I might need to go team green next build. I was hoping for power consumption optimization but it does not look like it.

Hard choice being an AMD fan (Not a crazy fanboy)

Ryzen 1600 + Nvidia 1060 + 16GB Ram.

Humm, that's 666. I guess it could be my Evil computer.

Unless Vega has some magical TDP for a cheap price!

April 18, 2017 | 02:28 PM - Posted by StephanS

Check this review. Also, I'm not sure the GTX 1060 will age that well.

And unless you are bitmining 24/7, at idle/normal load it seems both cards are equivalent.
By that I mean the extra 20 watts or so during gaming should be "invisible" in your power bill.

http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/75127-amd-...

April 18, 2017 | 09:54 AM - Posted by Lucidor

I was hoping for a more power efficient card at a more considerable discount. Like 20W less power draw and $200 for the 8GB and $160 for the 4GB.
This is disappointing even for a rebrand.

April 18, 2017 | 12:46 PM - Posted by Jtaylor1986

I don't understand why you care about 185W of power draw. You will save like $2.00 in electricity over the life of the card by saving 20W. As long as it has adequate cooling at an acceptable noise level, power consumption isn't that important.

April 18, 2017 | 01:53 PM - Posted by Xukanik

For me the power draw matters because I want a quiet system.

Power = More Heat = Louder system.

April 18, 2017 | 04:01 PM - Posted by Jtaylor1986

This is completely false. As long as the cooling capacity of the card is equal to or greater than the TDP of the part, a quiet system can be achieved. Yes, there are practical limits to this, as you aren't going to have a 20-pound heatsink on a card or fans the size of a dinner plate, but there is no technical reason why a part with a TDP of <200W can't be quiet.

April 18, 2017 | 05:02 PM - Posted by Lucidor

Anything can be quiet, but – given the same cooler, designed for dissipating ~200W – a card with 80W lower TDP is going to be a whole lot quieter, especially if the PC is placed somewhere with limited airflow, as is the case with mine.

April 18, 2017 | 05:19 PM - Posted by Kamgusta

Are you serious?
More W means more heat in the room. While it is nice in winter, it is not in summer.
Also, given the fixed size of VGA cards (they can't be as tall as some CPU heatsinks), higher power consumption roughly translates into higher fan RPMs and thus more noise.
NVIDIA GTX 1060s consume so little power they often stop spinning their fans, even while playing games.
RX 480s/RX 580s do not.

April 19, 2017 | 05:38 AM - Posted by Kamgusta

Well, I have to eat my hat for that.

April 19, 2017 | 12:06 PM - Posted by Jtaylor1986

Blower style cards are inherently noisier than open air designs.

April 19, 2017 | 04:22 AM - Posted by Jann5s

Power = More Heat = Louder system.

This, I completely agree with. It is especially true for people who like to game in the living room and thus use tiny systems, because they don't want their living room looking like a student room. Because of this I've gone the NVIDIA way since Hawaii.

Now my eye is back on AMD because I really hope some TV manufacturer will make a FreeSync-compatible TV soon. My guess is that this will happen before there is a G-Sync TV, mainly because FreeSync will be available in the Neo and Scorpio.

April 19, 2017 | 12:16 PM - Posted by Jtaylor1986

This usage case (small form factor cases) is pretty much the only reasonable argument in my opinion for power consumption being a primary concern when buying a card (within reason).

April 18, 2017 | 10:08 AM - Posted by mouf

I will be the first to admit that I am an NVIDIA user, but obviously I still want AMD to succeed and bring some competition to the table to push the market ahead. This, though, is another disappointing product launch.

I pretty much mirror Lucidor's comments.

April 18, 2017 | 10:15 AM - Posted by Toettoetdaan

It's surprising how a game selection can influence the conclusion of a review. In other reviews, the RX 580 has an 8% lead over the 1060, and here it is the other way around: an 8% lead for the 1060.

It might be interesting to do a big write-up of all game engines and their strong and weak points for different GPUs and CPUs.

Anyway, I like your review and test methodology; it is an example of how it should be done!

April 18, 2017 | 11:06 AM - Posted by Ryan Shrout

Thank you sir! It's always interesting to see how things vary.

In what other reviews do you see the opposite results? Curious.

April 18, 2017 | 04:38 PM - Posted by Jtaylor1986

I've read many reviews today and it's very hard to compare across sites, as your testing, graphs, and data are flat out different (better) than all but maybe 1 other site's, not to mention settings and resolution differences. Most sites are still stuck in the world of average and min FPS, unfortunately.

April 18, 2017 | 04:52 PM - Posted by Ryan Shrout

It's tough. There are some metrics and reporting that other outlets use that I would like to integrate but just haven't had time to.

April 18, 2017 | 10:36 AM - Posted by CB

It'll be very interesting to see how the 580 overclocks with the additional available power.

A 10% overclock could put it even with the 1060.

April 18, 2017 | 11:08 AM - Posted by Ryan Shrout

Agreed. I have some results I didn't have time to put in - more likely 3-4% overclocking headroom with our sample.

April 18, 2017 | 11:18 AM - Posted by Stefem

I don't think it will overclock much; power consumption could go out of control.
The RX 580 already consumes around 100W more, which is almost double that of the similarly performing GTX 1060.

April 18, 2017 | 11:20 AM - Posted by Stefem

3-4%! That sample was quite at its limit.

April 18, 2017 | 11:30 AM - Posted by Activate_AMD

Seems unlikely, based on the TDP increase and relatively small clock increase, that it will overclock anywhere close to 10%. The 480 didn't have a ton of headroom, and this is basically just a factory OC that's probably using up most of the additional headroom vs the 480.

April 18, 2017 | 10:48 AM - Posted by Mobile_Dom

personally I would love to see a 550 review; brand new (to an extent) silicon is always interesting, and it could be interesting for a cheaper workstation, going from an older dGPU or an iGPU to it and seeing the delta.

just my thoughts

April 18, 2017 | 11:08 AM - Posted by Ryan Shrout

Yup! Those launch a bit later. Hopefully I'll get hands-on with one!

April 18, 2017 | 10:48 AM - Posted by Activate_AMD

So nothing new, it is basically just a factory overclock masquerading under a new model number... AMD even calls it "Polaris 20" according to PCPer's chart. Seems a bit overzealous for an 80MHz OC.

I'm not even sure what AMD is trying to do with this refresh. The RX 560 and 550 are new, so I guess AMD took the opportunity to do a line-wide rebrand to help sell a bunch of low-end chips? Is that worth the cynicism that's inevitably generated by the irrelevant upgrades to the 480/470? I feel like this launch says something about where Vega is right now, because if it were me I would have held off on the Polaris re-brand until Vega dropped, in order to leverage Vega hype across the brand. How long will AMD go without a high-end card? Nvidia has been uncontested above $250 for a year now.

April 18, 2017 | 12:52 PM - Posted by Jtaylor1986

I think the answer to your question is that they were somewhat backed into a corner and had to rebrand, as it's clear they don't have the resources to launch a full top-to-bottom lineup of chips in a reasonable time frame anymore. If they didn't create the 500 series brand, they would have had 3 new chips (Vega 10, 11 and Polaris 12) in the same product naming family as 2 older chips (Polaris 10 and 11). Basically the best of bad options, with the RX 580 being pretty shameless while the RX 570 and RX 560 at least offer a bit more compelling value this time around.

April 18, 2017 | 03:59 PM - Posted by terminal addict

Surprising that so many are critical of AMD's rebranding of Polaris. Intel launched a "7th gen" CPU that was essentially a slightly overclocked 6th gen CPU. There were some people calling Intel out on the rebrand, but nowhere near to the degree people are with the RX 500 series.

I'm not saying I approve, but this just seems to be the norm now. Not sure why AMD has to be the company singled out the most.

April 18, 2017 | 09:10 PM - Posted by Activate_AMD

No argument from me that the difference between Skylake and Kaby Lake is so minimal that it is effectively a re-brand. The difference is that stock performance increased with zero power increase. That tells you one of two things may be going on - they're getting better bins, or they actually made a process change. The RX 580 gives you a small clock speed bump at a tangibly increased TDP... i.e. they just overclocked the card and called it a day.

Consider their relative positions as well. Intel is in a position of strength, and has been for a long time. Criticize them for not pushing the envelope, but unfortunately that's what you get when R&D is expensive and competition is nonexistent. AMD is the clear underdog and has been for years now (speaking about GPUs). They need a win, or at least something to claw back mind share, and this is what they give us? After a year of Pascal crushing the high end? They should have been primed to drop a big new card with the node change, because everyone knew there would be a huge wave of people upgrading old 28nm cards. OK, so they led with Polaris because Vega on 14nm was a bit too risky, but it's been a year now. How many sales have been lost to people who waited 6 months and gave up?

Having this crappy re-brand come before Vega feels like a blunder, because it just highlights how slow they've been to bring out something on the high end. They should have launched the 550/560 to get us ready for Vega launching under a 5xx moniker, like NV did with the 750 Ti, then brought the 580/570 rebrands out at the same time. Makes me wonder if Vega is still so far out that that strategy would not be viable.

April 18, 2017 | 11:48 AM - Posted by Kamgusta

Well, I really see no point in buying an RX 480/RX 580 while the GTX 1060 6GB has a 5% performance advantage, consumes a lot less power, has a more sophisticated suite of drivers, and costs roughly the same. And if you manage to find one of the newer 9Gbps GTX 1060s, that's the jackpot.

April 18, 2017 | 12:59 PM - Posted by Jtaylor1986

It depends on which games you want to play. I'd read reviews that test the games you want to play, as this is a fairly small sample size and doesn't include some very popular titles, like BF1, the Total War series, Civilization 6, Deus Ex: Mankind Divided, etc.

April 18, 2017 | 02:04 PM - Posted by Ryan Shrout

Obviously every review will have a unique sample of games to test, but I would counter that the Total War games, and possibly Civ 6, don't count as "very popular titles."

Again, to each their own of course. The more data out there, the better.

April 18, 2017 | 04:08 PM - Posted by Polycrastinator

So I understand they may be less popular, but it would be nice to see at least one strategy game in there. You hit a lot of other marks; something with a lot of small units, such as a late-game Civ benchmark, would be an interesting addition (I'd say Ashes, but I worry a bit that they put their thumb on the scale for AMD).
For that matter, some sort of compute/creative benchmark would be interesting, too.

April 18, 2017 | 04:25 PM - Posted by Jtaylor1986

http://store.steampowered.com/stats/

As I write this, Dirt Rally, Rise of the Tomb Raider, and Hitman are not even in the top 100 titles currently being played on Steam. There are 4 Total War titles and 2 Civilization games on the list, though, so I'm not sure what data you are looking at. Also, there isn't a single game on the list from either EA or Ubisoft. I'm not saying that including more titles will change the overall conclusions, because I have no idea, but forming them from a sample of 6 is IMO a bit too small.

April 18, 2017 | 04:52 PM - Posted by Ryan Shrout

Point taken.

However, with Origin and Uplay out there, the metrics aren't covering all bases.

April 18, 2017 | 03:57 PM - Posted by Kamgusta

To the commenter above: I did.

TechPowerUp tested 22 games while AnandTech only 9. Aside from 1 or maybe 2 titles, we always see the RX 480/RX 580 behind the GTX 1060 6GB. In GTA V, by a large margin. So nothing has changed since the launch of the RX 480, and I can make an educated guess that nothing will change in the future, either.

But I don't really care about the exact amount of the performance deficit (does it really matter if it is 4%, 7% or 9%? Not to me), so I roughly translated it into a 5% deficit. I rounded it in AMD's favor to keep AMD fans from crying, and to make things easier.

What matters is that the RX 480/RX 580 don't deliver the same FPS as the GTX 1060 6GB. And as they cost the same amount of money (again, a $5 or $10 difference doesn't really matter) while consuming a lot more power, I really see no point in choosing them instead of a shiny, brand new GTX 1060 6GB.

April 18, 2017 | 05:46 PM - Posted by Hood

Haven't seen any 9Gbps 1060s, but $750 will get you this 8GB 1060 - https://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=...

April 18, 2017 | 06:27 PM - Posted by Kamgusta

Newegg got the product details wrong: it is a GTX1080.

April 18, 2017 | 01:07 PM - Posted by Jtaylor1986

I know this is a lot of work, Ryan, but any chance we can add more titles to the testing in the future? I think you made good choices of very popular titles, but it's hard to say if this is a representative sample of the performance gamers will find off the beaten path, so to speak. Also, AFAIK Dirt Rally is the only title in your suite that didn't have heavy AMD/NVIDIA involvement in its development.

April 18, 2017 | 02:06 PM - Posted by Ryan Shrout

I honestly couldn't care less about who was "involved" in the games. To some degree it's good to know for anecdotal information, but we look at games that are popular, fit a niche (getting some first-person, third-person, racing, etc. games in there), and push the GPU a bit more than others.

But to your point, Hitman was definitely an AMD-centric title, as was Dirt Rally.

April 18, 2017 | 03:14 PM - Posted by Jann5s

I was really hoping this iteration of 14nm could squeeze out a few more MHz without the TDP increase. I thought at the time that the RX 480 could have been hindered by process challenges that could be resolved in this v2 of the chip. Sadly, it seems Polaris 10 (20) is at its limits.

P.S. just noticed, no more anonymous posts? (I like it)

April 18, 2017 | 04:53 PM - Posted by Ryan Shrout

It's still possible the process is partly responsible, but it just hasn't been refined enough for us to see any differences. 

April 18, 2017 | 05:10 PM - Posted by pdjblum

i appreciate the re-branding, even if it is only a small deal, as it helps me to know what I am buying

rather than guessing whether this is a revised 480 or the original when I go to buy, i like being able to know which revision i am buying, so when i get a 580 i know what i am getting

so much more civil and pleasant without the anonymous posts

thanks so much

April 18, 2017 | 09:48 PM - Posted by Sebastian Peak

You make a good point as to product differentiation. While products based on the same core can be dismissed as 're-brands', it is also fair to say that providing a clear differentiation to the end-user is valid. I think the question becomes whether introduction of a new series (5xx from 4xx) is required for faster SKUs, or would a variant product name (RX 480X or RX 485 for example) be enough?

April 19, 2017 | 01:58 AM - Posted by pdjblum

sebastian, even V2 would have been sufficient

i recall mobos used to have version numbers, which was essential to know

but i don't fault them at all as i believe lisa and raja are about the most humble and honest and brilliant people amd has had running the place for quite some time, and i don't get that they are being deliberately deceptive with the naming scheme

it's just fuckin numbers anyway, but i guess the community takes it all very seriously

lg and samsung release a new tv each year, usually with only marginal changes from the year before, but they refer to the models by year, as you know, and I certainly appreciate knowing that

it is reasonable for the change to 500 to distance it from all the bad publicity the 480 got with the power distribution issue, don't you think?

i just appreciate that companies like amd and intel and nvidia and the rest make such cool shit for me to enjoy

i am so fucking lucky as far as that goes, so they can call the shit whatever they want as long as i know what i am getting

i recently decided to try an induction cooktop, single burner, because i hate using the fancy electric range in my very high end apt in austin, and once again, all i can feel is how crazy cool it is that I can own such awesome tech, and for only $125 or so

that goes for all my stereo equipment and my japanese ceramic knife, which is my favorite tech of all

so with all this amazing goodness which i don't deserve, how can i fault these folks for some numbers

April 18, 2017 | 10:35 PM - Posted by Pingjockey

The big question at this point is really whether this card is a valid upgrade path. For example, I currently have an R9 Fury from Sapphire, which I got on sale at a great price point. Assuming I wish to stay with AMD, is the 580 an option for an upgrade? From the power usage side of things one would say yes; it's harder to say for performance.

Again, great article and truly enjoy the site!

April 19, 2017 | 09:08 AM - Posted by Activate_AMD

If you didn't think the 480 was a worthy upgrade, then the 580 isn't either. If you want a GPU that occupies the same place in the product stack that Fury occupied when it was initially released, you can't get one from AMD and should wait for Vega if you feel compelled to stay AMD

April 19, 2017 | 10:56 AM - Posted by Ryan Shrout

Yeah, in your case, if you aren't considering NVIDIA, you should wait.

April 19, 2017 | 01:17 AM - Posted by Abyssal Radon

After looking into many reviews of this Polaris refresh, I've got to say I'm extremely disappointed with AMD for this one. I fully understand the market they are chasing with this refresh/rehash of GPUs, but I scratch my head at re-brands 99% of the time; I find these cards somewhat misleading. Take an RX 480 and compare it to an RX 580: they are pretty much the same thing with a slightly higher base clock on the core. Granted, at a slightly lower price point it is a nice upgrade for someone still using a ~380X-class GPU (AMD or NVIDIA). Perhaps I'm disappointed because these GPUs wouldn't be a worthy upgrade for me... I suppose the point I'm trying to make is: please do not release a re-brand (literally); focus on the good stuff, aka Vega.

Awesome review as always and have been a long time reader. So GG Mr. Shrout.

April 19, 2017 | 10:57 AM - Posted by Ryan Shrout

Thanks!

I am 100% in agreement that had this refresh come with a $30-40 price cut, they would sell a lot more of them and possibly cut into the market share of NVIDIA's 1060 product line.

April 19, 2017 | 12:27 PM - Posted by Jtaylor1986

I would wager a guess that with a $30-$40 price cut they would be selling these cards at basically break even. At <20% market share that is something you might have to do over the short term to stay relevant or die. At ~30% market share sustainable profitability becomes the driving factor.

April 19, 2017 | 11:02 PM - Posted by Streetguru

No overclocking? Trying to get a feel for whether 1500MHz will be close to the average OC.

Radeon Chill can drastically help with power usage, but I get that many people probably don't care, or don't play a game it supports.

As far as the 580 vs 1060 goes, I'd probably still stick with the 580 for the bit of extra VRAM, but mostly for the FreeSync cash savings.

Although, don't a majority of the games you tested here lean towards NVIDIA cards in general?

Following that, sad to not see DOOM Vulkan - a 480/580 gets near a 1070 in that game. DX12/Vulkan needs to get here sooner if games can all be that well optimized.

April 20, 2017 | 12:33 AM - Posted by James

While it is great that you have the gear to measure the power draw of the card precisely, I would still like at-the-wall measurements. NVIDIA's driver puts a much heavier load on the CPU, so you aren't getting quite the full story just measuring the power consumption of the card. I don't care too much about load power anyway, but it isn't that hard to get some at-the-wall measurements. I care a little bit more about idle power, but electricity is cheap. I replaced three 75-watt incandescent bulbs in my kitchen with LED bulbs at 15 watts each, so I saved 180 watts for lights that are on for hours every day. I didn't notice any meaningful difference in my electricity bill.

April 20, 2017 | 08:05 AM - Posted by analogue

Apparently the MSI card comes "over-volted".
It can actually run at 1393MHz at lower voltages, pulling less power!
https://youtu.be/MQ9ro5pwfXY

Also, it would be nice to test the benefits of Chill.
Let's assume that a gamer with a 75Hz FreeSync monitor doesn't gain anything from a card pushing 100fps rather than 80fps. Assuming the play experience would be the same with Chill locked at 75fps, how much would Chill reduce the consumption?

November 7, 2017 | 12:06 PM - Posted by PhotM

Ryan & Jeremy,

I just got my Tower back from the shop with an RX 580 8GB installed, along with 2 10TB WD Golds, and a complete checkup. It is a Sandy Bridge with Cougar Point chipset, i7 2600K CPU, Intel motherboard, never overclocked, 32GB memory, and 2 256GB 850 EVOs for OS partitions 3 and 4. The extra one is for a foray into BSD/TrueOS. The other 6 are for a custom system recovery/boot master; Main: W 8.1 Pro, Test: W 8.1 Pro, WIP Fast: W 10.0 Pro latest build, and WIP Preview up to date 1703, all x64. The PC is fast and unencumbered by post-Broadwell shenanigans or UEFI/Smart Boot.

I really don't like playing computer games; if anything, I prefer to watch others play. But I do love trains, so I am into Dovetail's Train Simulators, which is where the GPU upgrade became necessary. The Radeon 6880 was just not cutting it anymore.

Have I missed something? I watch you and Jeremy and the boys on my 55" HDR Sony TV every week and read your blogs, so I was shocked to see the lack of W 8.1 x64 drivers. I see only W 7 and W 10, the latter of which I have justified over W 7. I do not recall anything mentioned about this limitation, WTF?

Is there any guidance you or the boys can give me in this matter? I am still setting things up, but I have managed to get the April drivers in, in place of the MS generic ones, which was a bear to accomplish compared to yesteryear! I tried out the 2 DT sims very briefly last night and there was a phenomenal difference in the graphics.

Best Regards,

Crysta

November 7, 2017 | 01:41 PM - Posted by Jeremy Hellstrom

Do you mean the graphics card driver? They can be found here: http://support.amd.com/en-us/download

 

November 7, 2017 | 04:31 PM - Posted by PhotM

Hi Jeremy,

Did you look at that page closely? I have been looking there and on MSI, they are both the same:

Radeon™ RX 500 Series​​
Windows 10 ​​(​64-bit)​​​​
Windows 7 (64-bit)
RHEL / Ubuntu
Latest Wi​ndows Optional Driver

No W 8.1 x64!!!

Thanks for responding however,

I also gather this is the first you have heard of this conundrum, too.

Best Regards,

Crysta

November 9, 2017 | 12:22 AM - Posted by PhotM

Jeremy,

I have found a SOLUTION, and it is not to commit to W 10.

I have gone to my WIP Skip Ahead W 10.0 Pro x64 partition, and the AMD/Radeon card automatics kicked in immediately; installation completed successfully. I still had some clean uninstalling of the old suite to do, but surprisingly little. I reinstalled from the install package again, just to be sure - hahaha NO, because the automatics never install the software in "Program Files".

Then I went back to my W 8.1 partition and continued trying to get the W 10 drivers in, looking in the FAQ/Knowledge base and Forum. All were absent of anything about W 8.1... a complete NO GO! I tried the RX 4xx drivers/suite: NO GO.

I then resorted to the W 7 drivers/suite. Much to my absolute amazement, everything installed as smooth as glass... GO FIGURE!

Best Regards,

Crysta

