The AMD Radeon RX 480 Review - The Polaris Promise

Manufacturer: AMD

It would be hard at this point NOT to know about the Radeon RX 480 graphics card. AMD and the Radeon Technologies Group have been talking publicly about the Polaris architecture since December of 2015, with lofty ambitions. Given the precarious position the company is in, well behind in market share and struggling to compete with the dominant player in the market (NVIDIA), the team was willing to sacrifice sales of current-generation parts (the 300-series) in order to excite the user base for the upcoming move to Polaris. It is a risky bet, and one that will play out over the next few months in the market.

Since then, AMD has continued to release bits of information a little at a time. First there were details on the new display support, then information about the advantages of the 14nm process technology. We then saw demos of working silicon at CES with targeted form factors, and then, at events in Macau, AMD showed the press the full details and architecture. At Computex the company announced rough performance metrics and a price point. Finally, at E3, AMD discussed the RX 460 and RX 470 cousins and the release date of…today. It’s been quite a whirlwind.

Today the rubber meets the road: is the Radeon RX 480 the groundbreaking and stunning graphics card that we have been promised? Or does it struggle again to keep up with the behemoth that is NVIDIA’s GeForce product line? AMD’s marketing team would have you believe that the RX 480 is the start of some kind of graphics revolution – but will the coup be successful?

Join us for our second major graphics architecture release of the summer and learn for yourself if the Radeon RX 480 is your next GPU.

Polaris 10 – Radeon RX 480 Specifications

First things first, let’s see how the raw specifications of the RX 480 compare to other AMD and NVIDIA products.

|  | RX 480 | R9 390 | R9 380 | GTX 980 | GTX 970 | GTX 960 | R9 Nano | GTX 1070 |
|---|---|---|---|---|---|---|---|---|
| GPU | Polaris 10 | Grenada | Tonga | GM204 | GM204 | GM206 | Fiji XT | GP104 |
| GPU Cores | 2304 | 2560 | 1792 | 2048 | 1664 | 1024 | 4096 | 1920 |
| Rated Clock | 1120 MHz | 1000 MHz | 970 MHz | 1126 MHz | 1050 MHz | 1126 MHz | up to 1000 MHz | 1506 MHz |
| Texture Units | 144 | 160 | 112 | 128 | 104 | 64 | 256 | 120 |
| ROP Units | 32 | 64 | 32 | 64 | 56 | 32 | 64 | 64 |
| Memory | 4GB / 8GB | 8GB | 4GB | 4GB | 4GB | 2GB | 4GB | 8GB |
| Memory Clock | 7000 MHz (4GB) / 8000 MHz (8GB) | 6000 MHz | 5700 MHz | 7000 MHz | 7000 MHz | 7000 MHz | 500 MHz | 8000 MHz |
| Memory Interface | 256-bit | 512-bit | 256-bit | 256-bit | 256-bit | 128-bit | 4096-bit (HBM) | 256-bit |
| Memory Bandwidth | 224 GB/s (4GB) / 256 GB/s (8GB) | 384 GB/s | 182.4 GB/s | 224 GB/s | 196 GB/s | 112 GB/s | 512 GB/s | 256 GB/s |
| TDP | 150 watts | 275 watts | 190 watts | 165 watts | 145 watts | 120 watts | 275 watts | 150 watts |
| Peak Compute | 5.1 TFLOPS | 5.1 TFLOPS | 3.48 TFLOPS | 4.61 TFLOPS | 3.4 TFLOPS | 2.3 TFLOPS | 8.19 TFLOPS | 5.7 TFLOPS |
| Transistor Count | 5.7B | 6.2B | 5.0B | 5.2B | 5.2B | 2.94B | 8.9B | 7.2B |
| Process Tech | 14nm | 28nm | 28nm | 28nm | 28nm | 28nm | 28nm | 16nm |
| MSRP (current) | $199 (4GB) / $239 (8GB) | $299 | $199 | $379 | $329 | $279 | $499 | $379 |

A lot of this data was given to us earlier in the month at the product’s official unveiling at Computex, but it is interesting to see it in the context of other hardware on the market today. The Radeon RX 480 has 36 CUs with 2304 stream processors, a count that lands between the Radeon R9 380 and the R9 390. But you also must consider the clock speeds enabled by production on the 14nm process node at Global Foundries. While the R9 390 ran at just 1000 MHz, the new RX 480 has a “base” clock speed of 1120 MHz and a “boost” clock speed of 1266 MHz. I put those in quotes for a reason – we’ll discuss that important note below.

The immediate comparison to NVIDIA’s GTX 1070 and GTX 1080 clock speeds will happen, even though their pricing puts them in a very different class of product. AMD is only able to run the Polaris GPU at 1266 MHz while the GTX 1080 hits a 1733 MHz Boost clock, a difference of 36%. That is substantial, and even though we know that you can’t directly compare the clock speeds of differing architectures, there has to be some debate as to why the move from 28nm to 14nm (Global Foundries) does not result in the same immediate clock speed advantages that NVIDIA saw moving from 28nm to 16nm (TSMC). We knew that AMD and NVIDIA were going to be building competing GPUs on different process technologies for the first time in modern PC gaming history, and we knew that would likely result in some delta; I just did not expect it to be this wide. Is it an issue with Global Foundries or with AMD’s GCN architecture? Hard to tell, and neither party in this relationship is willing to tell us much on the issue. For now.
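For what it’s worth, the quoted gap checks out; here is a quick sketch in Python using the clocks from this paragraph (expressing the gap relative to the slower clock is my assumption about how the figure was computed):

```python
def clock_gap_percent(faster_mhz: float, slower_mhz: float) -> float:
    """Percentage by which the faster clock exceeds the slower one."""
    return (faster_mhz - slower_mhz) / slower_mhz * 100

# GTX 1080 boost (1733 MHz) vs. RX 480 boost (1266 MHz)
gap = clock_gap_percent(1733, 1266)
print(round(gap, 1))  # ~36.9, in line with the "36%" above
```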

At the boost clock speed of 1266 MHz, the RX 480 is capable of 5.8 TFLOPS of compute, running well past the 5.1 TFLOPS of the Radeon R9 390 and getting close to the Radeon R9 390X, both of which are based on the Hawaii/Grenada chips. There are obviously some efficiency and performance improvements in the CUs themselves with Polaris, as I will note below, but much as we saw with NVIDIA’s Pascal architecture, the fundamental design remains the same coming from the 28nm generation. But as you will soon see in our performance testing, the RX 480 doesn’t really overtake the R9 390 consistently – but why not?
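The compute figures fall straight out of the shader count and clock. A minimal Python sketch of my own arithmetic, assuming the standard 2 FLOPs per stream processor per clock (one fused multiply-add):

```python
def peak_tflops(stream_processors: int, clock_mhz: float) -> float:
    """Peak FP32 throughput: each stream processor retires one FMA (2 FLOPs) per clock."""
    return stream_processors * 2 * clock_mhz * 1e6 / 1e12

sps = 36 * 64  # 36 CUs x 64 stream processors per GCN CU = 2304

base = peak_tflops(sps, 1120)   # ~5.16 TFLOPS, the table's 5.1 figure
boost = peak_tflops(sps, 1266)  # ~5.83 TFLOPS, the 5.8 quoted here
print(round(base, 2), round(boost, 2))
```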

With Polaris, AMD is getting into the game of variable clock speeds on its GPUs. When NVIDIA introduced GPU Boost with the GTX 680 cards in 2012, it was able to improve the relative gaming performance of its products dramatically. AMD attempted to follow suit with cards that could scale by 50 MHz or so, but in reality the dynamic clocking of its products never acted as we expected. They just kind of ran at top speed all of the time, which sounds great but defeats the purpose of the technology. Polaris gets it right this time, though with some changes in branding.

For the RX 480, the “base” clock of 1120 MHz is not its minimum clock while gaming, but instead is an “average” expected clock speed that the GPU will run at, computed by AMD in a mix of games, resolutions and synthetic tests. The “boost” clock of 1266 MHz is actually the maximum clock speed of the GPU (without overclocking) and its highest voltage state. Contrast this with what NVIDIA does with base and boost clocks: base is the minimum clock rate that they guarantee the GPU will not run under in a real-world gaming scenario while boost is the “typical” or “average” clock you should expect to see in games in a “typical” chassis environment.

The differences are subtle but important. AMD is advertising the base clock as the frequency you should typically see in gaming, while the boost clock is the frequency you would hit in the absolute best-case scenario (good case cooling, a good quality ASIC, etc.). NVIDIA doesn’t publicize that maximum clock at all. There is a “bottom base” clock below which the RX 480 should not drop (and the fan will increase speed to make sure it doesn’t, in any circumstance), but it is hidden in the WattMan overclocking utility as the “Min Acoustic Limit” and was set at 910 MHz on my sample.

That is a lot of discussion around clock speeds, but I thought it was important to get that information out in the open before diving into anything else. AMD clearly wanted to be able to claim higher clock rates in its marketing and product lines than it might have been able to had it followed NVIDIA’s convention exactly. We’ll see how that plays out in our testing – but as you might infer from my wording above, this is why the card doesn’t bolt past the Radeon R9 390 in our testing.

The RX 480 sample we received for testing was configured with 8GB of memory running at 8 Gbps (8.0 GHz effective) for a total memory bandwidth of 256 GB/s. That’s pretty damned impressive – a 256-bit GDDR5 memory bus running at 8.0 GHz gets us those numbers, matching the performance of the 256-bit bus on the GeForce GTX 1070. There are some more caveats here though. The 4GB reference model of the Radeon RX 480 will ship with GDDR5 memory running at 7.0 GHz, for a bandwidth of 224 GB/s - still very reasonable considering the price point. AMD tells me that while the 8GB reference models will ship with 8 Gbps memory, most of the partner cards may not and will instead fall in the range of 7 Gbps to 8 Gbps. To be frank, it was an odd conversation; AMD was basically alluding to the fact that many of the custom-built partner cards would default to something close to 7.8 Gbps rather than the full 8 Gbps.
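The bandwidth math here is straightforward. A quick Python sketch (my own helper, using the usual bus-width-in-bytes times per-pin-data-rate formula for GDDR5):

```python
def gddr5_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Bandwidth in GB/s: bus width in bytes times effective per-pin rate."""
    return (bus_width_bits / 8) * data_rate_gbps

print(gddr5_bandwidth_gbs(256, 8.0))  # 256.0 -> the 8GB reference card
print(gddr5_bandwidth_gbs(256, 7.0))  # 224.0 -> the 4GB reference card
print(gddr5_bandwidth_gbs(256, 7.8))  # 249.6 -> the partner-card rate AMD hinted at
```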

Another oddity – and one that may make enthusiasts a bit more cheery – is that AMD built only a single reference design card for this release to cover both the 4GB and 8GB varieties. That means the cards that go on sale today listed as 4GB models will actually have 8GB of memory on them! With half of the DRAM disabled, it seems likely that someone will soon find a way to share a VBIOS that enables the additional VRAM. In fact, AMD provided me with a 4GB and an 8GB VBIOS for testing purposes. It’s a cost-saving measure on AMD’s part – this way they only have to validate and build a single PCB.

The 150 watt TDP is another one of those data points we have known about for a while, but it is still impressive when compared to the R9 380 and R9 390 cards that use 190 watts and 275 watts (!!) respectively. Even if the move to 14nm didn’t result in clock speeds as impressively high as we saw with Pascal, Polaris definitely gets a dramatic jump in efficiency that allows the RX 480 to compete very well with a card from AMD’s own previous generation that used nearly 2x the power!
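Using the table’s TDP and peak-compute numbers, the on-paper efficiency jump is easy to put a figure on. A rough Python sketch of my own (TFLOPS-per-watt is a crude proxy, not a measured gaming-efficiency result):

```python
def gflops_per_watt(peak_tflops: float, tdp_watts: float) -> float:
    """On-paper efficiency: peak compute (converted to GFLOPS) per watt of TDP."""
    return peak_tflops * 1000 / tdp_watts

rx480 = gflops_per_watt(5.1, 150)   # 34.0 GFLOPS/W
r9_390 = gflops_per_watt(5.1, 275)  # ~18.5 GFLOPS/W
print(round(rx480 / r9_390, 2))     # ~1.83x, reflecting the "nearly 2x the power" gap
```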

You get all of this with a new Radeon RX 480 for just $199 for the 4GB model or $239 for the 8GB model. Though prices on the NVIDIA 900-series have been in flux since the launch of the GeForce GTX 1080 and GTX 1070, you will find in our testing that NVIDIA just doesn’t have a competitive product at this time to match what AMD is releasing today.

June 29, 2016 | 09:10 AM - Posted by 1st (not verified)

It's not quite what I expected- but I'm okay with it. Hopefully new drivers and OCing will push for 5-10% performance gains in the non factory cards.

June 29, 2016 | 10:20 AM - Posted by Anonymous (not verified)

This review is way off from a certain trusted 3D website that shows it on par with the GTX 980 at all times with the exception of that Tomb Raider garbage. hmmm

June 29, 2016 | 12:46 PM - Posted by Penteract (not verified)

Can't believe a word you say when you won't even name your "trusted 3D website".

June 29, 2016 | 01:17 PM - Posted by sneaky (not verified)

pcper is one of the best and most respected hardware sites out there (alongside techreport and computer base)

June 29, 2016 | 09:42 PM - Posted by Jeremy Hellstrom

June 30, 2016 | 02:39 AM - Posted by Anonymous (not verified)

saw who posted this.. made my day!

June 30, 2016 | 12:21 PM - Posted by Anonymous (not verified)

Really? I distinctly remember PC Per reporting Polaris being delayed because it can't hit above 850 MHz. That kind of quality reporting sure builds respect.

June 30, 2016 | 10:36 AM - Posted by s (not verified)

You should really take a look at the benchmarks out on YouTube; it doesn't even beat a 970 in all the games, let alone a 980.

June 30, 2016 | 10:58 AM - Posted by ITGamerGuy


I think that unlike the green team, with proper cooling, we may be able to see higher clocks. I like what RTG is doing, and if you leave everything on Auto, for most gamers, it should be OK.

ATI has always been about "damn temps and power draw, give me power!", which I have always been about too. I don't mind 80% fan speed; that is what closed-back headphones are for =)

I still see the possibility of upping performance in this architecture. What remains to be seen is what the AIB partners do with this chip. I am hopeful, with proper cooling and what I have been seeing from reviews, that there is headroom to get higher clocks.

June 29, 2016 | 09:12 AM - Posted by James (not verified)

A very welcome return to form from AMD, this will be a huge seller by all accounts. At the current price point there is no competition from AMD.

Importantly it sets a new performance threshold that is very positive for gamers who can't justify the 1070 cost of entry. A very solid release indeed.

Any idea when the 470/460 reviews will be coming? I'm really looking forward to seeing what performance can be had from a GPU powered only by the PCIe bus.

June 29, 2016 | 09:13 AM - Posted by James (not verified)

that should read 'competition for AMD'

June 29, 2016 | 01:08 PM - Posted by Penteract (not verified)

The RX 480 comes in at an excellent price and sets a new standard for its relative performance level. Even if you were leaning towards the added Nvidia technologies (PhysX, G-SYNC), the 970 is simply an older card that doesn't have some of the features current-gen cards have (also true with the last-gen AMD offerings). It would be, in my opinion, a bad decision to buy a GTX 970 over an RX 480, even if the price was identical.

But comparing it to a GTX 1070 and saying "can't justify the cost of entry" really makes no sense. Either you need the power for your intended usage that the 1070 provides or you don't. If you do, the RX 480 is a bad choice because it won't do what you need it to do. If you don't, the GTX 1070 is a waste of money. If you need an Nvidia card, at least wait until the GTX 1060 comes out and hope it will have a similar price/performance.

June 30, 2016 | 03:23 PM - Posted by Eric (not verified)

I am wanting to do 4K, I am not picky on frame rates, things don't have to be 60+ all the time, I don't need every setting sky high (especially can't imagine needing AA at 4K), but I haven't seen any reviews so far with any games at 4K.

June 29, 2016 | 09:19 AM - Posted by Anonymous (not verified)

And where the heck can we buy this? Of course Newegg is already showing out of stock!

June 29, 2016 | 12:36 PM - Posted by Anonymous (not verified)

I see the PowerColor card in stock at Newegg as of now.

June 29, 2016 | 09:22 AM - Posted by Anonymous (not verified)

But future drivers will fix everything, right? The secret sauce?

What a disappointment, and it sure didn't help that there was so much hype with false expectations. That's partially down to fanboys and FUD articles out there, but also AMD themselves.

Remember when they posted that one slide that said 2x480 would beat a single 1080? Yeah, right! Of course the smart ones could see past it.

June 29, 2016 | 01:04 PM - Posted by Adam Mellor (not verified)

lmao no they didnt show it beating the 1080 lmao are you blind. It was behind the 1080 but not by much

June 29, 2016 | 01:13 PM - Posted by Anonymous (not verified)

Pretty clear that they said 2x480 > gtx 1080

June 29, 2016 | 01:38 PM - Posted by Anonymous (not verified)

That was in Ashes of the Singularity. That is one DX12 game. AMD did not say that 2x480s would beat the 1080 in every game and scenario.

June 29, 2016 | 01:48 PM - Posted by arbiter

Yeah, they CLAIMED it in AotS, but looking at this review, beating a GTX 1070 looks like it will be iffy in most games.

June 29, 2016 | 04:12 PM - Posted by Anonymous (not verified)

Between you and other websites (one major one) only doing/focusing on the DX11 benchmarks, folks can see where the money is going to FUD up on AMD while spinning positive towards Nvidia.

Watch the websites that practice those lies of omission by only testing on the older graphics APIs and not even attempting any DX12/Vulkan games. Once the fully optimized DX12/Vulkan games are out, there can be more benchmarks done.

When all the RX 480 features are tweaked, and more and better games make use of explicit GPU multi-adapter and the new graphics APIs, then 2 RX 480s may just be a very good deal for getting GTX 1080 levels of performance at a very nice price savings (even for 2 RX 480s). DX11 is not the way forward for gaming, as DX12/Vulkan are out and being developed for! And the programming of the Polaris HWS units is in microcode, so the HWS units can be re-programmed and their scheduling algorithms improved over time with new microcode/firmware updates.

At least Charlie over at S/A is doing a point-by-point comparison of each of the Polaris execution units' new feature tweaks/improvements, for shaders, tessellation, compression, sound, scheduling etc.

Nvidia has more money to hire in astroturf land, and it sends those turfing squads out in force! Nvidia is sure making DX11 its focus still, but DX11 is now on the way out.

You, arbiter, are looking for that spin against AMD and for your green masters, including one other prominent Nvidia-favoring poster on one tech website in particular going over to another prominent Linux OS/Linux test-suite testing website and spinning for Nvidia there.

June 30, 2016 | 02:49 AM - Posted by Anonymous (not verified)

DX12 has been tested, it's just that not many titles exist. Of the few that do, AotS may be heavily optimized towards AMD's RX-480 (thus not representative of most DX12 titles) and most of the others if not all only have some DX12 code tacked on.

DX11 is on the way out?
Sort of, but how many games will the average person have in their Steam library that use DX12 by the end of 2017?

In fact, how many DX12 titles will ship relative to DX11 in the next year?

It's not tough for me to recommend cards now though (aside from waiting for prices to get closer to MSRP). If you have about $260 then get an RX-480 8GB.

If you have $240 or so and can't swing any more get the 4GB RX-480. (after-market like Asus Strix or similar with 8-pin or 2x6-pin recommended for RX-480).

In the $400-plus range it's simply GTX 1070 or GTX 1080, again once prices stabilize.

There's really no overlap, nor any great DX12 data except that you should avoid NVidia 900 series if your budget is in the RX-480 range.

July 1, 2016 | 12:09 PM - Posted by Anonymous (not verified)

"Sort of, but how many games will the average person have in their Steam library that use DX12 by the end of 2017?"

Not everyone who plays games uses Steam. It's terrible.

June 30, 2016 | 02:05 PM - Posted by Anonymous Nvidia User (not verified)

DirectX 11 on the way out? LOL. Why are video cards backwards compatible to DirectX 9? Blizzard still makes games DirectX 9/10 compatible, such as StarCraft 2, Diablo 3 and World of Warcraft, and they are a very profitable company. DirectX 12 isn't viable yet with only a 350 million user base, for Windows 10 only; all other Windows platforms number a billion or two or more. So yeah, there's that and other hurdles to jump. If Microsoft makes DirectX 12 available to Windows 7/8 users via patch, then you can talk declining. Games sometimes take a few years to make, and you're not going to scrap one and go with a new DirectX right away; true DirectX 12 games are still a year or two away, and they will still have DirectX 11 versions. When you see mostly DX12-only versions of games coming out, you can make your statement confidently.

June 29, 2016 | 10:11 PM - Posted by Anonymous (not verified)

Yes, they said two RX 480s would beat one GTX 1080 in Ashes of the Singularity.

And they do.

June 29, 2016 | 09:24 AM - Posted by Anonymous (not verified)

Just buy a used 970 and forget you ever had any hope for the 480.

June 29, 2016 | 11:01 AM - Posted by Anonymous (not verified)

970 is considerably slower in DX12 titles than the 480

June 30, 2016 | 02:07 AM - Posted by Anonymous (not verified)

I'd rather drink a pint of my own diarrhea, on the hour, every hour than buy anything from nVidia.

July 1, 2016 | 03:06 PM - Posted by Anonymous (not verified)


July 4, 2016 | 09:35 AM - Posted by Anonymous Nvidia User (not verified)

He only has that much s**t because he's an extreme AMD fanboy.

June 29, 2016 | 09:27 AM - Posted by Anonymous (not verified)

"The immediate comparison to NVIDIA’s GTX 1070 and GTX 1080 clock speeds will happen, even though their pricing puts them in a very different class of product. AMD is only able to run the Polaris GPU at 1266 MHz while the GTX 1080 hits a 1733 MHz Boost clock, a difference of 36%. That is substantial, and even though we know that you can’t directly compare the clock speeds of differing architectures, there has to be some debate as to why the move from 28nm to 14nm (Global Foundries) does not result in the same immediate clock speed advantages that NVIDIA saw moving from 28nm to 16nm (TSMC). We knew that AMD and NVIDIA were going to be building competing GPUs on different process technologies for the first time in modern PC gaming history, and we knew that would likely result in some delta; I just did not expect it to be this wide. Is it an issue with Global Foundries or with AMD’s GCN architecture? Hard to tell, and neither party in this relationship is willing to tell us much on the issue. For now."

14nm is more densely packed than 16nm, and maybe AMD was going for more cores per die with a higher density design library process tweak to get more dies per wafer and price them lower to grab that mainstream market share. Maybe AMD went with more layers and a denser circuit structure that can not be clocked as high. But maybe with a little better cooling solution the part's clocks can be higher with the AIO coolers and the custom boards.

There are higher-density design/library variations even among the different GPU SKUs, with some designs made to achieve smaller dies at the expense of higher clocking ability, for more dies per wafer and better pricing metrics. With other designs AMD will be able to have the circuit pitch increased, or GF will have a high-performance tweak of the 14nm process licensed from Samsung; Samsung is sure to be tweaking that 14nm process for higher performance in future offerings, so GF can license any newer Samsung 14nm LPP processes.

June 29, 2016 | 09:46 AM - Posted by Anonymous (not verified)

"14nm is more densely packed than 16nm"

By only a marginal amount: these processes from TSMC and GloFo are a bit misleadingly named. They've each picked one particular element of the patterning process they perform exceptionally well and chosen that as their naming metric. In practical terms, they are almost identical in feature size, as can be seen from the Chipworks comparison of the two A9 dies.

June 29, 2016 | 10:45 AM - Posted by Anonymous (not verified)

Yes, BUT GPUs use high-density automated design/layout libraries and have more layers than CPUs, which utilize low-density automated design libraries that use fewer layers and pack transistors less densely to allow CPUs to be clocked higher. At 14nm, using higher-density automated layout libraries to achieve more massively parallel processing units per unit area, GPUs cannot be clocked as high as CPUs! And there are even variations among the GPU-style high-density design libraries that allow some GPUs to be clocked higher than others!

The smaller a process node gets, the fewer substrate atoms per unit area there are to absorb heat phonons and transfer them efficiently to carry heat away from the transistors. Even Intel had problems at 14nm, but CPUs are laid out less densely by design; GPUs are designed/laid out denser, and that means less heat can be allowed to be generated. When you cut a process node size by half (28nm to 14nm) the density goes up by 4 times, and depending on the process node/circuit pitch and the overall layout (done by the automated design libraries/software), GPUs can be very densely packed and unable to be clocked very high. So the design engineers and the yield engineers, along with the bean counters, design the GPUs for the intended market, and the trade-offs are calculated well in advance of the final design being frozen and then certified and brought to market.

June 29, 2016 | 10:58 AM - Posted by Anonymous (not verified)

I forgot to add: AMD's GPUs have all the async compute implemented in their hardware, so those extra circuits add to the heat budget but increase performance relative to any GPU designs that implement async compute in software/firmware and less in their hardware.

Why do you think Nvidia went with 16nm and higher clocks, to make up for some of that async-compute disadvantage that they have!

June 29, 2016 | 12:28 PM - Posted by Stefem (not verified)

Look at the power draw: a GTX 1070 consumes about the same as the RX 480. They went up to that level of clock simply because they were able to.

June 29, 2016 | 04:24 PM - Posted by Anonymous (not verified)

Designing with high clock targets is a bit of a risk. Designs aimed at higher clock speed targets require deeper pipelines. The number of pipeline stages unfortunately can not be changed easily, so those decisions may have been made a very long time ago. Making the pipelines too deep can result in a huge amount of extra power consumption and so can having too high of clock speed target for your process technology. Nvidia certainly spent a lot of man hours doing optimization to optimize power and clock speed for the 1080/1070. That isn't that much of an issue for a $700 card, but it isn't exactly optimal for a mid-range $200 card. The 480 design can probably be tweaked a lot considering it is both a new design and a very new process. We might get a more optimized design later for a 480x or something with higher clock. I think the 480 represents a good value as is though, especially with DX12. I don't think Nvidia's DX12 hardware support is equivalent yet; AMD has worked on DX12 like hardware and drivers much longer due to their invention of Mantle, which is very close to DX12.

June 29, 2016 | 07:50 PM - Posted by Stefem (not verified)

Bah, maybe from the new process; the design is not a great departure from prior GCN iterations.
Also, Polaris should be compared to Pascal before deciding how good it is; of course, this will only be possible when NVIDIA releases a card with a similarly sized GPU.

June 30, 2016 | 03:49 AM - Posted by Chris Telting (not verified)

This card is being sold wholesale. I think AMD is doing this to boost up their fab while gaining press and market share.

Developing the fab technologies is an ART. In order to ramp up production to produce cheaper you have to spend money in one way or another so that the fab can get the practice and expertise it needs in order to meet future targets.

One way is to produce parts and just trash them only to bundle in their cost to products that pass spec. The other is to sell lower spec parts in bulk.

AMD chose a lower frequency part that matches the previous generation from nVidia at a low price point to help their fab master the art of production. They are logically going where AMD always goes, to the value market where most of the money is to be made.

If AMD wanted they could easily produce a GTX1080 card and likely do have internal models. They would just cost twice to three times what nVidia charges.

The practice AMD gives their fab now will also help with their future CPU runs.

I want a GTX1080. What I can afford is two RX480's.

June 29, 2016 | 04:53 PM - Posted by Anonymous (not verified)

Without the proper testing mule there is no way of knowing if that is the GPU/die itself or the card's other power-drawing features, so it may not be any of GF's 14nm process node's standard performance to blame. And remember, AMD's async compute is going to keep more execution resources running with fewer remaining idle because of better scheduling, so expect more power usage simply because the execution units are being fully utilized. That smaller 14nm die is going to get hotter, and if the tested SKU is of a reference design, the cooling may not be there to stop some of the heat-related extra power draw issues.

Hot circuits have higher leakage and if the cooling solution is not on top of things then the heat feeds back into the circuits causing more leakage leading to more heat. It's a vicious feedback cycle sort of thing.

How much larger is the 1070's die? Is the 1070 a binned 1080 part with some units disabled, so that it has more dead/unpowered silicon available with which to absorb and dissipate heat relative to an RX 480 die? The RX 480's 14nm node is about 14% smaller than the 16nm node and is more densely packed over the same unit area. And what is the circuit pitch on TSMC's 16nm node compared to GF's 14nm node? The RX 480's die is much smaller, so less heat transference can happen over a unit of time compared to a larger die on a larger process node.

Maybe the AIB RX 480 boards will have better cooling; more testing needs to be done. More testing also needs to be done with fully optimized DX12/Vulkan games, and maybe there need to be some driver tweaks over the next few weeks as well. It's a brand new card, so teething problems come with any new GPU release.

June 30, 2016 | 03:00 AM - Posted by Anonymous (not verified)

AMD answered this at PCPER.

The GPU in the RX-480 was started almost three years ago. They had MOBILE customers in mind for launch. Then VR came along and they needed something to compete.

AMD was forced to increase the frequency for a design not optimized for that to get to the VR Ready target which they just BARELY did.

What's nice about this admission is that it's likely Vega has been designed with higher frequencies in mind, so it should be closer to NVidia's frequencies on the GTX 1080.

June 30, 2016 | 08:58 AM - Posted by Stefem (not verified)

I don't think they pushed the frequency to keep up with VR; quite clearly their main concern was to be able to compete against NVIDIA's (yet older) products.

June 30, 2016 | 10:09 AM - Posted by Anonymous (not verified)

"Competing" is not quite the right term here. They wanted to reclaim a big portion of the market share they have lost in the past few years, as well as build hype for their future cards. By putting out such a huge value card they are cementing their name out there again and grabbing the biggest part of the market. Contrary to what everyone thinks, the flagship cards make up very little of the market share as a whole. For a sub-$250 card there is no comparison for the 480; it's a no-brainer, a complete wash, not a competition. Nobody in their right mind would buy anything but a 480 if they have sub $250 right now for a video card.

June 29, 2016 | 09:27 AM - Posted by mwave (not verified)

Yep, depends what you want out of a graphics card. For me, VR bare minimum will not do; my GTX 970, which I have had for over a year, is not enough for demanding games in VR. Need more POWER. Not for me.

June 29, 2016 | 09:31 AM - Posted by Anonymous (not verified)

Something's not right with the "Polaris 10 – Radeon RX 480 Specifications" table: it says that the R9 380 has three times more ROPs than the R9 390, and it also states that the RX 480 is on 16nm.

June 29, 2016 | 09:40 AM - Posted by Ryan Shrout


June 29, 2016 | 09:36 AM - Posted by darthrevna43 (not verified)

So it has about the same power target as an Nvidia GTX 970, and performance-wise it's trading blows with it, usually coming out on top. Given that there are massive discounts now on the 970 – I even got mine for 220 euros 3 weeks ago – I fail to see how AMD can win the graphics market. Sure, it has DP 1.3 & 1.4, but for most people that will not matter as much as the Nvidia branding that has been on most people's minds for the last year.

I really wanted AMD to have a win here but I fail to see how they can conquer the market for the time being. Maybe I'm not seeing something but in my eyes this is a flop in terms of excitement in the way it's positioned now.

June 29, 2016 | 11:44 AM - Posted by Anonymous (not verified)

It will win with VR games/newer games using async compute, and the RX 480 has more async-compute future-proofing ahead. There will be more fully optimized DX12/Vulkan titles putting more gaming compute onto the GPU in the future, so look for systems with less powerful CPUs to benefit even more from Polaris. Let's start testing the games with weaker CPUs, and as time goes on and the games become able (via DX12/Vulkan) to do more GPU gaming-compute acceleration on async-compute-enabled Polaris GPUs, let's see how the RX 480 improves with time. Those GTX 970/Maxwell GPUs are not going to do well on future fully DX12/Vulkan-enabled game titles. Let's test for 6 months to a year and see, with all manner of CPU SKUs and with the RX 480 and GTX 970; Nvidia cannot program its way out of that async compute deficiency in its GPUs' hardware.

June 29, 2016 | 09:38 AM - Posted by Panu Roivas (not verified)

This is merely "OK", not the big win I was hoping for from AMD. It's certainly the card to get right now at $200-250, but it probably won't be after the 1060 launches.

It looks to me like either GloFo 14nm is very limited, or GCN just doesn't have the legs to clock much past this without considerable changes to the architecture. Perhaps if Vega is done at TSMC we'll have something to compare to, but it's alarming that Nvidia can get 2 GHz clocks while AMD's chips have a hard time reaching 1.3 GHz.

Doesn't look too good in terms of AMD gaining much market share back with this. Given AMD's resources, perhaps it's wishful thinking that they could surpass their competitors at this point.

June 29, 2016 | 11:39 AM - Posted by quest4glory

Basically, anyone who doesn't have a G-Sync monitor or depend on Nvidia proprietary software such as ShadowPlay would be foolish to spend the same money or more on a 970.

June 29, 2016 | 06:46 PM - Posted by Stefem (not verified)

Better to wait some weeks then.

June 29, 2016 | 09:39 AM - Posted by Anonymous (not verified)

I would like to know what happened to the RX 460 and when it will be released.

June 29, 2016 | 09:42 AM - Posted by Anonymous (not verified)

AMD needs a 14nm process to match what Nvidia did with 28nm two years ago? Where I live, the asking prices for the RX 480 seem to be on par with 970 custom designs. So great job AMD, you have done well... Trololollololooo.

Sincerely, the Internet. (some of us liked your ads though - We give you that much.).

June 29, 2016 | 09:42 AM - Posted by Anonymous (not verified)


Getting one 8GB as soon as they are on sale in my country! :D

June 29, 2016 | 09:45 AM - Posted by Abi Hakim (not verified)

Is the GTX 970 at stock?
Because that's a tad low.
I own both an R9 290X and a GTX 970.
In The Witcher 3, the GTX 970 butchered the R9 290X by 10-15fps at 1080p.

Honestly, at this price point it's quite a good upgrade for people who own a card that is LESS powerful than an R9 290 or GTX 970. Remember, this is a $199 card folks, don't get your expectations too high or you'll wind up with a broken heart :D

As for me, I hope Vega won't disappoint next!

June 29, 2016 | 12:33 PM - Posted by Stefem (not verified)

Yep, the GTX 970 is at stock clocks; you can read the clock speeds of each card in every chart.

June 30, 2016 | 02:18 AM - Posted by Anonymous (not verified)


I'll be slapping one of these in to replace my venerable 7970 GHz Edition. It's such a steal, I'd be crazy not to.
Later I'll buy an AIB Vega part when those are available, since things are looking good for Vega: 2nd-gen GloFo 14nm and a shitload of ROPs/CUs. Bring it, AMD!

June 29, 2016 | 09:46 AM - Posted by p0tilas

Would have been a cool card a year ago...

June 29, 2016 | 04:41 PM - Posted by Bezzell

You nailed it.

June 29, 2016 | 09:51 AM - Posted by Anonymous (not verified)

June 29, 2016 | 10:08 AM - Posted by Anonymous Nvidia User (not verified)

True to my word. Barely better than a stock 970. A partner 970 or an overclocked reference one will beat it comfortably for comparable or less wattage. ROFL. Who was right, anonymous? You can suck one. Must cite source? I can give you a bunch if you want, but we both know it isn't necessary. I'm not the a**hole Nvidia fanboy you claim I was. It is everything I said it would be, except possibly worse. It's a good value if you only look at price. In fact I feel sorry for AMD. It is what it is. Maybe I should change my user name to RX 480 1080 gtx killer not. The hype was over the top for this. Who told everyone not to get excited over it? If you're an AMD fanboy this is a good buy for you. Everyone else, not so much. I should rub it in more, but Nvidia might raise their prices even more because of this fail.

June 29, 2016 | 12:15 PM - Posted by Anonymous (not verified)

Anyone who declares a winner this early in the game with the new DX12/Vulkan fully enabled titles on the way, and the benchmarking software still needing to fully catch up with the New gaming/graphics API ecosystems, is truly an egregious fool, or a paid astroturfer, or in your case both!

Never trust any reviewer who declares an ultimate winner this early in the competition, especially with the new DX12/Vulkan APIs, games, gaming engines, and benchmarking software needing the time to fully be able to properly adapt and measure the hardware/games after all this rapid software/hardware technological change in such a short amount of time.

June 29, 2016 | 01:10 PM - Posted by Anonymous Nvidia User (not verified)

That's rich. I'm paid by no one. I wish I could start my own tech site, though. I call it like I see it. I've been involved in PC gaming since 1990. I've seen quite a bit and maybe know a thing or two. By the time any volume of DX12/Vulkan games hits the market, these cards will be obsolete, as Nvidia will have designed a card that supports these APIs well. That's assuming DX12 doesn't die from lack of support. No company is going to risk its financial livelihood by only making a DX12 version of a game. I notice you don't mention OpenGL, which is just as viable; is that because AMD cards don't do it as well?

June 29, 2016 | 04:44 PM - Posted by Anonymous (not verified)

So you would rather be stuck with the underperforming DX11 version of games for the next 2 years? Regardless of budget, I wouldn't buy any Nvidia card right now; they are behind in hardware support. Although, I consider paying any more than about $300 for a video card to be foolish. DX11 will die quickly. DX12 is here to stay and will be adopted very quickly due to the console market. Both major consoles can support DX12 titles, and DX12 is strongly favored due to the improved multi-threading capabilities. Both have low power 8-core CPUs. I hope you aren't recommending to your friends that they buy obsolete Nvidia cards.

June 30, 2016 | 06:23 PM - Posted by Anonymous Nvidia User (not verified)

I sure don't want the larger power draw of DX12 either. Basically the CPU's overhead decreases a little and its wattage drops a little. However, the GPU's consumption increases beyond what the CPU loses, and this is with or without asynchronous compute enabled. This should not be happening. A lot of sites tout that CPU wattage can drop 50%, which is nothing compared to the video card's wattage increase, which these sites don't tell you about. For the little performance gain, I'd rather not have DirectX 12 or asynchronous compute.

Or consider this one. Basically, huge gains from DirectX 12 can be had if you have a weak dual- or quad-core processor.

I know both of these feature AOTS, but it's the de facto AMD standard-bearer for DirectX 12. Guess you're stuck with it. Maybe AOTS isn't coded well and shouldn't be considered a valid benchmark.

June 29, 2016 | 11:02 PM - Posted by Anonymous (not verified)

Awww, yaaaay! The little fanboy liar is back! I'm so glad to see you, I missed slapping you around the comments.

And you've already started right in with the lies! "I'm not the a**hole Nvidia fanboy you claim I was." And yet every comment you've made since you came back proves that, yes, you really are.

Now, in that big long comment thread a week ago where you consistently embarrassed yourself and didn't even know it, the only claim you made about the RX 480 was that two of them in Crossfire wouldn't come close to a single 1080. But, wouldn't ya know it, they do a pretty decent job keeping up at least. And the difference in price is bigger than the difference in performance. Oh, and those are reference cards, too. A pair of AIB cards with significantly better cooling and power delivery will only get better.

No, two reference RX 480s do not beat a single 1080, as things stand right this minute. (See what I did there? That's called "accepting the facts". I hope you learn to do that soon, because you're wrong REALLY OFTEN.)

You're still wrong about your claim that FreeSync only works up to 90Hz, and so far everything you've shown to support that claim has been on one 144Hz monitor with a FreeSync range up to 90Hz. Show me something that doesn't rely on the Asus MG279Q that supports your claim and we'll talk.

Then there was this gem: "There is a lot bigger difference between AMD review samples and retail samples than 1-1.5%." And then you pointed to two completely different review sites, using two completely different hardware platforms, and referenced the overall FireStrike scores instead of the graphics scores, I guess because you somehow thought that was proof enough.

What about your claim that buying the less expensive AMD card would wind up being more expensive once you include the added power cost on one's electric bill? Hint - it would take about 18 to 20 years to make up the difference on one's electric bill. Are you going to admit you were wrong about that? (No, of course you're not.)

Remember saying this? "Do you really think a $200 video card is going to clock at 1500 mhz when their Furyx enthusiast card maybe could reach what 1150-1200 mhz. A 380 core clock is 970 mhz. The base of Rx 480 is presumed to be 1266 mhz is 30% boost and 1500 mhz is 55% boost. Doesn't seem too likely."

We'll see when the AIB cards come out - if some of them can get over 1500MHz, will you admit you were wrong? (Or will you turn around and cling to, "nuh uh, that's not $200!" instead?)

I also remember this little nugget of cowpie: "AMD fanboys use all the excuses in the world. I've read so much BS from them such as I still get 60fps with my ancient HD (insert model number of your choice). No need to upgrade yet. Well you fanboys are the reason AMD is in such dire straits. Buy a product once in a while instead of bragging."

Amusing that someone trying to power a 4k monitor with his GTX 760 would think his own argument didn't apply to him. I'm shocked, I tell you. Shocked.

Oh, don't forget about this one: "Nvidia cards have more built in limiters and protections in their cards versus AMD." I'm still hoping you're actually going to talk about what "limiters and protections" Nvidia has that AMD doesn't have. Because it sounds more like a claim that you pulled out of your bottom and stated it as a fact in the hopes that everyone would think you knew what you were talking about and accept it. I think you made it up.

In fact, I know you made it up. You said so yourself - you were "assuming" that AMD's marginally higher failure rate was "probably" related to temps and "possibly" less protections in place. Just admit that it was just your fanboy bullshit and move on. Clinging to a lie when you've been proven wrong is just sad.

Oh I'm so glad you're back, princess. I'm gonna have so much fun with you.

June 30, 2016 | 12:00 PM - Posted by Batismul (not verified)

If you call this "fun" you're a very sad pathetic individual

June 30, 2016 | 12:57 PM - Posted by Anonymous Nvidia User (not verified)

Yeah, you haven't proven me wrong either with facts and citations. Yours is largely opinion as well. You cherry-pick things, and a lot of the stuff I posted was right, but you didn't address it at all; such as Radeon's horrible power consumption in 1080p video playback. You glossed over most of it, picked a few things out, and said I was only right about maybe one thing. OK. I could post more but don't really care to.

About FreeSync, you may be right about it going up to 144Hz, but its most effective range is between 40Hz and 90Hz, meaning it performs best there. Yes, its supported range can be 9Hz to 200Hz(?), but thus far it only goes down to about 30Hz. I don't think either FreeSync or G-Sync has to do much beyond 90Hz, because you need one or two powerful cards to tap that.

You are entitled to your opinion as well. Is the card going over the PCI Express power spec limits fake, then? AMD has admitted to the problem already. I think this is more serious than Nvidia's DVI not working above a certain range on overclockable Korean monitors. So I'm an a**hole for trying to spare AMD fanboys from harming their systems. OK.

About review samples versus retail, there isn't much to go on, as most tech sites do not buy retail samples to compare to the press samples, so no surprise you won't find much. It was disappointed purchasers who said they didn't get anywhere near the numbers reviewers got. Although that may be due in part to the Fury X performing better with a stronger CPU ($1000+) that most people don't have. But shopping a few cards around to reviewers because of "limited" supply doesn't look on the up and up. As far as I know Nvidia hasn't done this, but both companies probably cherry-pick cards for review.

Electricity costs about $0.07 per kilowatt-hour where I live. This is cheap compared to Europe and other places. A difference of only 60 watts of power over 18 years at 8 hours a day is going to cost about $220 more, assuming the rate holds, which it won't. That's around $12.25 a year more. Want me to prove my math? A comparable Nvidia card doesn't cost that much more. LOL. It's $37 to $61 over a card's average lifetime of 3 to 5 years. More time playing, a much higher rate, or an increase in the rate over time affects this, maybe drastically. Wattage of a comparable AMD card can go 100+ watts more than Nvidia for a few more frames, or with asynchronous compute enabled. Is asynchronous compute "free" performance, then? If it's worth it to you for the average increase of 5-10%, then OK, but it isn't free. Another point you glossed over. So, do you exaggerate much?
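For what it's worth, the arithmetic in that comment does check out under the commenter's own stated assumptions (a 60 W draw difference, 8 hours a day, $0.07/kWh; the function name here is just for illustration):

```python
# Extra electricity cost of a higher-wattage card, using the
# commenter's assumed numbers: 60 W more draw, 8 h/day, $0.07/kWh.
def extra_cost(watts=60, hours_per_day=8, rate_per_kwh=0.07, years=1):
    kwh_per_year = watts * hours_per_day * 365 / 1000  # 175.2 kWh/yr
    return kwh_per_year * rate_per_kwh * years

print(round(extra_cost(years=1), 2))   # 12.26 -> about $12.25/year
print(round(extra_cost(years=18), 2))  # 220.75 -> the ~$220 figure
print(round(extra_cost(years=3), 2))   # 36.79 -> low end of $37-$61
print(round(extra_cost(years=5), 2))   # 61.32 -> high end of $37-$61
```

So the numbers quoted ($12.25/year, ~$220 over 18 years, $37-$61 over a 3-5 year card lifetime) are internally consistent, whatever you think of the conclusion.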

Exactly. The $239 (8GB) reference card hits what, near 1400 MHz at best so far? The $200 (4GB) model is supposed to be weaker, with slower RAM. Not confirmed yet, however.

As for the GTX 760, I bought that when it was new 3 years ago, and I bought my 4K monitor less than 6 months ago. I gave my old system away to a needy person at work and gave my 1080p monitor to him as well. My card will do 4K at higher details than the consoles do at 720-900p, at least 30 frames. It's usually well over 30fps but doesn't hit 60 unless I compromise settings a bit. The only exception is AC Unity, where I get the same frame rate at higher settings, as it uses my card's entire 4 gigs even at 1080p. Older games, of course. I always have the option to play at 1440p or 1080p as well. The monitor is forward-looking and has amazing detail. I am also looking to upgrade my card to play newer games better. No rush; it's adequate for now.

More may have been a less than optimal choice of a word. Maybe it implied number. I should have said "better" protection.

What do you presume the higher failure rate correlates to? Maybe cheaper parts on a cheaper-quality video card from a deep-in-debt company possibly cutting corners. You usually get what you pay for. I say that because it could be the reason, or something else entirely; I'm not an engineer. You are entitled to your opinion as well. Lie is a strong word; tell me I'm 100% wrong because you know the real reason. I'm waiting for your proof. Get real.

Resorting to childish name calling. Who is the one having fun? I'll just have to use better word selection and prove the littlest things to your nitpicking.

June 29, 2016 | 10:15 AM - Posted by Anonymous (not verified)

"Microsoft’s Chas Boyd was on-hand at AMD’s editor’s day for Polaris and previewed ideas that MS is implementing to help improve multi-GPU scaling. The best news surrounded a new abstraction layer for game developers to utilize for multiple GPU support that MS would be releasing on GitHub very shortly. According to Microsoft, with only “very little” code adjustment, DX12 titles should be able to implement basic multi-GPU support."

The way to go is with the DX12, and Vulkan(More Open than DX12), graphics APIs in charge of multi-GPU load balancing. Let this multi-adaptor be done in the graphics APIs/OSs and more folks involved in creating the Multi-graphics-adaptor load balancing algorithms. This is the way multi-GPU load balancing and support should have been done in the first place, with the OS/Graphics APIs in charge of sending the work to the GPU/s of any make or model that are plugged into any PC/laptop or other computing device.

Keep the hardware drivers simple and close to the metal, move most of the multi-GPU load-balancing support into the graphics APIs/OS, and standardize the way a computing system accesses its available processing resources. Vulkan lets the GPU ODMs register extensions to the API to handle any new feature sets, with the graphics API more in charge of the workloads given to each GPU/processor, and it's probably the same for DX12. As for load balancing between multiple processors (GPUs, CPUs, and others), it's better to have the entire computing industry in on developing the load-balancing algorithms for multi-adaptor use, instead of just the companies that make the GPU/other processor hardware.

So M$ is releasing some middleware, but I'm more in favor of standardizing things more formally in the Graphics APIs and in the OSs, for the proper management of any processing hardware installed on a computing system.

June 29, 2016 | 10:23 AM - Posted by Anonymous (not verified)

Not amazing; even a little disappointing considering it's 14nm with 2304 SPs and it's pretty close to the 970 (28nm, 1664 SPs) in almost everything (power usage and performance).
Also, the 970 overclocks like a champ and will leave the RX 480 (which doesn't overclock well) far behind...

BUT as a $200 card it's pretty decent; a huge improvement over the old stuff like the 960, for sure...
Also, as a first GPU at GloFo, it looks pretty OK, I guess? It can only get better!?

Anyway, I hope the best for AMD, and I'm looking forward to the 470 and 460, since those would probably be more suitable for me.

June 29, 2016 | 10:24 AM - Posted by area man (not verified)

Dirt Rally – 1920x1080 – Frame Variance

The R9 390 still takes a commanding lead, running 16% faster than the new Polaris 10 GPU, but obviously at lower power.

* higher power consumption

June 29, 2016 | 10:45 AM - Posted by Anonymous (not verified)

Before even coming to PCPer, I already knew their review would do everything in its power to make AMD look bad and Nvidia look good... PCPer is, after all, so Nvidia-biased it's ridiculous...

June 29, 2016 | 10:54 AM - Posted by Anonymous (not verified)

What are you babbling about? The numbers are the numbers. The card is good. They said it was good. You are insane.

June 29, 2016 | 10:56 AM - Posted by Stefem (not verified)

Yeah, all against AMD; that's why Raja decided to be interviewed there...

You are making yourself ridiculous. If PCPer wanted to be unfair, they would have simply put a GTX 1070 in the review and not awarded the gold award...

June 29, 2016 | 11:02 AM - Posted by Anonymous (not verified)

You are a shame to all the people named Anonymous!

The review was mostly positive and matches what other sites are saying, and the Raja interview is coming; deal with it!

June 29, 2016 | 11:02 AM - Posted by J Nevins (not verified)

Well, I'm still happy with my 390X card, bought over a year ago. Disappointing that PCPer never actually reviewed a 390X.

June 29, 2016 | 11:10 AM - Posted by Stefem (not verified)

Hey Rayan, nice review; it's a pity there isn't a complete VR comparison, especially since the card was advertised for that use by AMD.
I can understand why there isn't a GTX 1070, but why include a GTX 960 with only 2GB? I see that on Amazon it costs $10 more, and actually the very same with rebate. I ask this because in some games the difference (both in performance and frame times) between the GTX 970 and 960 looks a bit weird, and I was wondering if it could be due to video memory saturation.

June 29, 2016 | 11:12 AM - Posted by Stefem (not verified)

Sorry, mistyped your name Ryan :)

June 29, 2016 | 11:32 AM - Posted by failedjedi

This isn't a bad card for the price. Granted, once you get to high-end/enthusiast it isn't a great option, but for budget/midrange/casual gamers, it's a hell of a choice.

June 29, 2016 | 11:42 AM - Posted by requies (not verified)

Look at this objectively. With this performance level and the advantages of async compute, this becomes the first recommendation for upgrading your rig for VR. For the price, this targets what is 80+% of the actual card market and offers massive performance for 1080p, which is still (BY A HUGE MARGIN) the largest gaming segment.

For a company that needs a bottom-line win for revenue and long-term sustainability, this is a home run.

June 29, 2016 | 12:07 PM - Posted by Stefem (not verified)

That looks like a typical case of "reckoning without one's host."

You'd expect its main competitor to overlook this segment?

June 29, 2016 | 11:55 AM - Posted by Anonymous (not verified)

I expected more. I allowed the hype to get me. I expected 980 and 390X performance. Now I'm disappointed.

I don't blame AMD. I clearly remember Raja claiming only >5 TFLOPS, and that is in the ballpark of the 390, so he was honest from the start. It was my own exaggerated expectations.

Anyhow, the card is still good value for money. I think I'll get one to replace my old 285 when AIBs come with custom coolers.

June 29, 2016 | 11:58 AM - Posted by icebug

Is HDMI 2.0 no longer backwards compatible with DVI? You should be able to use an HDMI-to-DVI cable with this card on DVI monitors, without an additional adapter from DP, as long as this is still supported. I use one on my HD 7950 and it works just the same as a DVI cable as far as clarity and performance.

June 29, 2016 | 12:09 PM - Posted by Anonymous (not verified)

Wow! This is the disruptive GPU Mr. Koduri was talking about? Awesome!


June 29, 2016 | 12:15 PM - Posted by David Gilmore (not verified)

Has anyone noticed the ranking of this card in PassMark? Their chart tells a very different story about this card's performance. The RX 480 scores 6369 G3D Marks, just a hair above the GTX 770 (6145) and GTX 960 (5925). It doesn't even come close to the GTX 970 (8661). How do you explain the 2300 G3D Mark discrepancy, when your charts show the RX 480 beating the 970?

June 29, 2016 | 12:36 PM - Posted by Stefem (not verified)

Where have you seen the Passmark benchmark you are talking about?

June 29, 2016 | 12:56 PM - Posted by Anonymous Nvidia User (not verified)

Here you are.

June 29, 2016 | 06:45 PM - Posted by Stefem (not verified)

Thanks, but I would advise that comparing results from different sources is not good practice. Now I'm really curious about the GTX 1060; it looks to be a really interesting card. SMP, power consumption, clock speeds... looks like it will be a hot summer ;)

July 4, 2016 | 09:50 AM - Posted by Anonymous Nvidia User (not verified)

Comparing results from other sources is called corroboration; it validates your findings. Anyone can find one biased source. The more sources you can find to prove your point, the better.

Yeah, the 1060 should be a good card for the masses. I'm guessing the price will be $250 or less. I figured Nvidia would keep an ace in the hole. They have the budget to sit on things until they need to reveal them.

June 29, 2016 | 01:00 PM - Posted by Stix124 (not verified)

After watching reviews today, I missed out by minutes on obtaining one. They sold out within an hour of the store opening. Now I just have to sit back and wait until more come in stock.

June 29, 2016 | 01:19 PM - Posted by Anonymous Nvidia User (not verified)

You are lucky. These cards draw more wattage than the PCI Express spec allows. They could possibly fry the motherboard connector or PCI Express power connector. They also run hot, around 84C. If you're set on getting one, wait a week or two for a non-reference version with better cooling and/or power delivery.

June 30, 2016 | 03:09 AM - Posted by Anonymous (not verified)

Do not get a model with 6-pin only.

PCPER was up to 200 Watts with a small overclock!

Even if your motherboard is not damaged it could corrupt the audio output. Ever moved a mouse and heard your audio squealing?

June 29, 2016 | 01:44 PM - Posted by renaissancemaster22

Lots of texture units @u@ I can't wait to start streaming with it

June 29, 2016 | 01:46 PM - Posted by Arkamwest

Thunder... Thunder... Thundercats, OOoohH....!!!

June 29, 2016 | 02:07 PM - Posted by Rick Rodriguez (not verified)

Why aren't we testing with Battlefield 4?!? Frostbite is still one of the most technically advanced game engines out there.

June 29, 2016 | 04:23 PM - Posted by Anonymous Nvidia User (not verified)

That's because they don't want Polaris to be shown losing to the GTX 970. HardOCP dropped it from their benches because they said they didn't have time, but added Hitman (2016). They said they plan to bring it back when Battlefield 1 comes out. Of course, draw your own inferences here.

June 29, 2016 | 10:43 PM - Posted by Anonymous (not verified)

Surely the 480 would win in Battlefield 4 since it has Mantle. Maybe that's why they don't include it.

June 30, 2016 | 01:33 AM - Posted by Anonymous (not verified)

Mantle has been behind DX for a long time now. It was only useful on weak CPUs (AMD's lineup and the Intel i3).

July 4, 2016 | 10:07 AM - Posted by Anonymous Nvidia User (not verified)

Here is one source with the RX 480.,4616-5....

An example of what an AMD-biased site would show.

Notice carefully that AnandTech had a GTX 970 in their Ashes bench, because the 970 loses there, but in Battlefield 4 the 970 is curiously absent. LOL

Tom's shows 54.5 fps for the 970 and 47.7 fps for the RX 480 at 1440p. The 970 is slightly under the 390 at this resolution. It would still be ahead if AnandTech had included it as well.

June 29, 2016 | 02:21 PM - Posted by StephanS

So an R9 290 / GTX 970 class card for $240?

And the power efficiency doesn't seem much better, if at all, compared to a GTX 970?

Nvidia seem to be the kings of GPU architecture, optimization, and design.
Nvidia's 28nm design beats AMD's 14nm FinFET! This is not looking good for Vega, because the RX 480 is half the speed of the 1070, yet consumes about the same power.

When Nvidia releases the smaller Pascal chip for the 1060/1050, AMD's window of opportunity will be closed.

I think $240 is too high a price for an R9 290 class card, but $199 seems just about OK for the 4GB model.

But... nobody sells the 4GB model.

June 29, 2016 | 02:36 PM - Posted by rollover (not verified)

Please someone make a version where the power does not come in at the top! There has to be a better way!

June 29, 2016 | 02:48 PM - Posted by Jargon09

I like it

June 29, 2016 | 03:46 PM - Posted by ern88

The card is drawing more power from the PCI-E slot than the slot is designed for. Plus, the RX 480 is using more power than its listed TDP. If you have a cheap OEM motherboard, I bet the card will burn out its PCI-E slot. Classic AMD, making promises they can't keep when it comes to power usage.

June 29, 2016 | 05:20 PM - Posted by Anonymous (not verified)

That is some pretty desperate trolling. In PCPer's very precise power measurements, it draws almost exactly 150 watts, which is the rated TDP. This is also exactly what the PCI-E slot and a 6-pin power connector are specified to deliver.

June 29, 2016 | 09:40 PM - Posted by HectorO (not verified)

Yes that's true. I'm glad PCPer tested this.

But with the smallest of overclocks it's out of spec. I think it's necessary to see an aftermarket 8-pin model.

June 30, 2016 | 03:15 AM - Posted by Anonymous (not verified)

It depends on what applications you use, however. Other sites have shown almost 90W from the PCIe slot when NOT overclocked.

And yes, people will overclock, so 200W is crazy. AMD should have locked down the card and prevented overclocking with only a 6-pin power connector.

200W means that either the motherboard or the PCIe cable is going at least 25W over spec.
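As a sanity check on the spec numbers being thrown around in this thread (assuming the commonly cited PCI-SIG budgets of 75 W for the x16 slot and 75 W for a 6-pin connector):

```python
# How far a 200 W draw exceeds the slot + 6-pin power budget.
SLOT_LIMIT_W = 75      # PCIe x16 slot budget (commonly cited total)
SIX_PIN_LIMIT_W = 75   # 6-pin auxiliary connector budget
observed_draw_w = 200  # overclocked draw reported in the comments

overage = observed_draw_w - (SLOT_LIMIT_W + SIX_PIN_LIMIT_W)
print(overage)         # 50 -> 50 W over the combined 150 W budget

# Even in the best case, where the excess splits evenly between the
# two sources, each one is still 25 W over its own limit:
print(overage / 2)     # 25.0
```

That even-split case is where the "at least 25W over spec" figure comes from; any less even split pushes one source even further over.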

June 30, 2016 | 08:42 AM - Posted by Stefem (not verified)

Agreed, even more so considering the small cost difference between a 6-pin and an 8-pin power connector.

June 29, 2016 | 04:44 PM - Posted by Bezzell

What a bummer.

June 29, 2016 | 05:23 PM - Posted by Anonymous (not verified)

Oh, you expected the performance of a $700 card for $200, right? So just beating everything else in the price range is a disappointment? Complete BS. This is almost certainly going to be my next video card.

June 29, 2016 | 04:52 PM - Posted by Geekavenger (not verified)

I understand it isn't a direct competitor to this card by design, but I would have really liked to see the 1070 added to the charts here. I am currently debating spending the extra money (once the prices come down to anywhere near MSRP) and getting a 1070 instead of a 480 and visualizing the performance difference would have been handy.

June 30, 2016 | 03:18 AM - Posted by Anonymous (not verified)

If you use MSRP as the price (the RX 480 will also be overpriced at first) and use the 8GB model for a fair comparison, then the FPS per dollar is identical (on average):

$380/$240 = 58% more expensive, and the GTX 1070 averages about 53% faster.

Neither seems to overclock much more so far.
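Taking that commenter's assumed numbers at face value (a $380 GTX 1070 vs. a $240 RX 480 8GB, with the 1070 averaging 53% faster), the perf-per-dollar really does come out nearly identical:

```python
# Relative FPS per dollar, using the comment's assumed prices and
# average performance delta (illustrative numbers, not benchmarks).
price_1070, price_480 = 380.0, 240.0
perf_1070, perf_480 = 1.53, 1.00   # RX 480 normalized to 1.0

price_ratio = price_1070 / price_480
value_ratio = (perf_1070 / price_1070) / (perf_480 / price_480)
print(round(price_ratio, 2))  # 1.58 -> the 1070 costs ~58% more
print(round(value_ratio, 2))  # 0.97 -> FPS/$ within a few percent
```

In other words, on those assumptions the 1070's price premium almost exactly cancels its performance lead.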

June 30, 2016 | 08:46 AM - Posted by Stefem (not verified)

I understand why they didn't add the GTX 1070 to the review, but if you think for a moment that the GTX 1060 is the natural competitor of this card, does this mean we will not see a GTX 1070 vs. 1060 comparison? Seems odd...

I'm confused now :)

June 29, 2016 | 05:09 PM - Posted by Anonymous (not verified)

It would be nice to be able to have a discussion without all of the obvious FUD that occurs in any Nvidia/AMD story. This card is about what I expected, and it is the clear winner in this price segment, at least until Nvidia has a new card in this segment. Nvidia does not offer any competition to this card right now. The absolute performance and performance-per-dollar graphs paint a very clear picture unless you are a troll.

Anyway, I find it interesting that this card seems to have 4 ACE instead of 8. The Xbox One has 2, I believe, and the PS4 has the full 8 used in most GCN cards after the first implementation. I missed the interview, so I don't know if Ryan discussed this with Raja. I guess this was probably done to save power? Although, it is unclear how the ACE units interact with the hardware scheduler. It would be great if someone could write an article about this, if the information is available.

June 29, 2016 | 05:10 PM - Posted by Orthello (not verified)

A couple of points about the 970 OC vs. 480 OC argument.

The AIB cards coming (and this is coming from Kyle Bennett at [H]) will comfortably clock in the 1490-1600MHz range on the core, 1600 being a golden sample and 1500MHz+ being very common.

That's 970/980-level max OC. So the $$ are yet to be seen for the better coolers etc., but AIB 480s are going to be a lot better than a 970, AIB vs. AIB even, as the 970 loses now in DX11 and badly in DX12.

Even in this review it shows the boost clock is throttled, under the set max boost by quite a margin. It's mainly due to the cooler and the power limits in the BIOS, I would think.

Throttling is reduced with better cooling, so that will help a little. I don't expect better-than-970 power consumption after an OC, however, which is way higher than Pascal, which is a bummer. So what you are basically getting here with an AIB card is 8GB of RAM vs. 4 (3.5), much better performance (particularly in DX12), and reportedly very good overclocking.

480 AIBs will be out before the 1060 is, also.

June 29, 2016 | 05:58 PM - Posted by aargh (not verified)

>480 AIBs will be out before 1060 is also.
There will be no stock of reference 480s before the 1060.
For a perfect launch AMD needed 100k cards, not 10k.
AIB cards - did you mean AIB cards with custom coolers? There are only 2 teases atm, MSI/Asus, and 0 info about specs and availability.

June 30, 2016 | 12:37 AM - Posted by Orthello (not verified)

No stock? Lol, I can buy them now, even in NZ. Not any cheaper than a 970 here, mind you, but the value is still better on the 480.

You missed one: leaks on the Sapphire Nitro as well.

I'm 100% sure we will see AIB 480s before the 1060. It's a paper launch for the 1060 on the 7th for sure; wait till the end of the month at least before you see a reference 1060 at the earliest. Most reports on the AIBs say 2 weeks, so mid-July, maybe sooner. And if they have anything like the 1070/1080 shortages, then yeah, it's an utter fail for NV.

So AIB 480s up against the reference 1060 on the 1060's launch day is my pick, so it could be quite a battle for that performance level.

Still, it's good to see the RX 480s are selling in large numbers, even the reference models.

June 29, 2016 | 06:51 PM - Posted by Stefem (not verified)

I still hear AMD vice president and product CTO Joe Macri's words:
“You’ll be able to overclock this thing like no tomorrow.” “This is an overclocker’s dream.”...

June 29, 2016 | 09:18 PM - Posted by Anonymous (not verified)

Some might stay up half the night fiddling with WattMan trying to optimize power use, then have nightmares... on either company's card. Hopefully someone will share good tips as they get better at it.

June 30, 2016 | 02:35 AM - Posted by Anonymous (not verified)

The reference boards that you can buy now are optimised for one thing: low price.
When the AIB partners start to ship their kit with better power delivery and better coolers, we will see :D

June 29, 2016 | 06:21 PM - Posted by WaltC (not verified)

"AMD is only able to run the Polaris GPUs at 1266 MHz while the GTX 1080 hits a 1733 MHz Boost clock, and difference of 36%."

This "review" needs a complete rewrite...;) Comparing the $600-$700 GTX 1080 with the $239 RX 480 is idiotic, but I see that doesn't stop you from doing it...;) AMD hasn't released its competitor to the 1080 yet; it's called Vega and should be released around Christmas.

But you could of course buy *two* 8GB RX 480's for X-fire if you were of a mind to, and have comparable performance, 2x the VRAM for D3D12 games, and still pay ~$200 less than for a single 1080, etc. But if that's the kind of power you want, you'd do better to wait for Vega, imo.

June 29, 2016 | 07:13 PM - Posted by Anom (not verified)

I think the comment was just comparing the clocks, not the performance. I think there was an expectation that, on the same or similar process nodes, AMD should have been able to match Nvidia on clocks, but they can't.

This can mean either that GlobalFoundries' 14nm sucks, that their engineers couldn't make an efficient architecture, or that to keep costs low they are allowing low-grade chips through to get volume.

Considering clocks are the way consumers get a performance boost from their cards for free, it's pretty disappointing how little headroom these have and how much heat they produce in the process.

June 30, 2016 | 03:37 AM - Posted by Anonymous (not verified)

The comment makes perfect sense. They simply questioned what the deal was with the frequency being much lower than NVidia's. Pretty simple to me...

AMD has already explained this at PCPer. They started the GPU almost three years ago and were targeting mobile first, so the design was optimized for lower frequencies. They switched gears to get a VR-ready product available, so they were forced to raise the frequencies, which is why it barely beats the GTX 970 (the minimum VR-ready GPU).

Buy two RX-480's for Crossfire and have comparable performance? Multi-GPU is not really the way to go yet. It's going to improve though it will take about two years or so to get proper support into game engines.

And considering the GTX 1080 averages 85% faster than the RX 480, Crossfire makes even less sense:

If we use $650 for the GTX 1080 and $275 for the RX 480 8GB, which I think will be realistic once prices stabilize, then there is a $100 difference in favor of a solution that may barely beat the GTX 1080 at times and at other times have the GTX 1080 up to 2x as fast in some titles.

In fact, we don't even know if AMD has the single-pass optimization for VR that can increase FPS by 1.6x. If not, and we use 85%, then the GTX 1080 will be 3x faster in some titles (or have much better visuals, as both should have the same 90Hz).
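A quick back-of-the-envelope of that comparison in Python; the prices and the 85% figure come from the comment above, while the Crossfire scaling values are hypothetical illustrations, since real scaling varies wildly per title:

```python
# Rough value comparison using the figures above: $650 GTX 1080,
# $275 RX 480 8GB, GTX 1080 ~85% faster than a single RX 480 (= 1.0x).
# The Crossfire scaling factors below are hypothetical.

gtx1080_price, rx480_price = 650, 275
gtx1080_perf = 1.85  # relative performance vs one RX 480

for scaling in (0.0, 0.5, 0.9):  # no CF support, mediocre, near-ideal
    cf_perf = 1.0 * (1 + scaling)
    print(f"CF scaling {scaling:.0%}: 2x RX 480 = {cf_perf:.2f}x "
          f"(${2 * rx480_price}) vs GTX 1080 = {gtx1080_perf:.2f}x (${gtx1080_price})")
```

Only near-ideal scaling nudges two 480s past a single 1080, which lines up with the "barely beats it at times, up to 2x slower at others" point.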

June 29, 2016 | 07:04 PM - Posted by Prasanna Shinde (not verified)

Hi guys, can you please explain how the RX 480 is a win for AMD?

Because the GTX 1070 consumes about the same watts as the RX 480 (150W vs 150W) but gives significantly more performance, even though the 1070 is more expensive...

And if you look at it another way, the GTX 970 also consumes less than or about the same watts as the RX 480 and performs about the same...

So, now my question is: "did AMD not improve their architecture at all (like what NVIDIA did with MAXWELL) and only take advantage of the 14nm node, OR am I missing something???"

And please, this is not a fanboy or flame-war question; this is a genuine concern of mine!!!

And thanks for the reply :) :) :)

June 29, 2016 | 07:15 PM - Posted by Anom (not verified)

It's not an architecture win, that's for sure. Nvidia has that without question.

It is a market win since it's a cheap card with good performance. The question for AMD is whether these are really profitable cards.

June 29, 2016 | 07:35 PM - Posted by Stefem (not verified)

I came to the same conclusion :)

June 30, 2016 | 12:50 AM - Posted by Orthello (not verified)

They improved their own power metrics hugely vs the 28nm generation; not so much against NV. You would think, by the way some people here go on about power consumption, that it matters more than features, gaming performance, and value.

It's of very low importance on my own scale of important metrics. Still, I guess when you can't win on perf/$$ or best-in-segment points, that's all you have left to make a negative point about.

I guess some will pay more for the 1060 to have better perf/watt, then spend years catching up that value difference in their power bill, lol.

Waiting for the arguments in the next gen over 10-watt differences... with every generation that goes by, this will matter less and less than it already does now.
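For what it's worth, the "catch it up in your power bill" point is easy to sanity-check; every input here is an assumption (a ~30 W draw difference, 4 hours of gaming a day, a roughly US-average $0.12/kWh rate):

```python
# Back-of-the-envelope: yearly electricity cost of a wattage difference.
# All three inputs below are assumed round numbers, not measured values.

watts_diff = 30        # assumed extra draw of the less efficient card, W
hours_per_day = 4      # assumed gaming hours per day
price_per_kwh = 0.12   # assumed electricity price, $/kWh

kwh_per_year = watts_diff * hours_per_day * 365 / 1000
dollars_per_year = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.1f} kWh/year -> ${dollars_per_year:.2f}/year")
```

At roughly $5 a year, a $50 price premium for better perf/watt would take about a decade to claw back.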

June 29, 2016 | 07:47 PM - Posted by M (not verified)

Folks are tools and do not see the whole story. The RX 480 has A LOT more under the hood than comparable Radeon or GeForce cards; they are using very new, top-of-the-line power circuitry on every CU, which they have not had a chance to optimize for EVERY scenario, the drivers are NOT optimized, etc. etc.

Why is the GTX 970 "supposedly" drawing less power but performing faster in Nvidia-biased titles? I WONDER WHY. Maybe because they are Nvidia-biased, maybe because the 970 is a much older card they have had time to optimize for, maybe the power circuitry Nvidia used/uses is older and so more "known", etc.

The RX 480 is a terrific design. I was looking at a Radeon R9 380/380X about a year ago, give or take, and knew this was around the corner, so I waited. I currently use a 7870, having owned two of them and kept the "better" one. This is costing ~$100+ less than my 7870s did, is using ~40W less (not counting overclocking, etc.), and is also ~2-2.5x faster with double to quadruple the amount of memory. IMO that is a massive, nice jump.

Keep in mind that ALL electrical anything can have spikes unless you put a limiter on it, and in the case of high tech, these limiters can cause instant crashes if you are too severe with the cut-in and cut-out of power. Obviously we do not want this, so there will be "play" in when the chip decides to regulate or not.

Anyway, the point is, basing conclusions on just one or two data points is BS when these things are amazingly complex. And while the GTX 1070/1080 are "faster" than the 900 series, they also chopped some more away and ramped up clock speeds to "make them fast": not as many parts to power, not as much power required. Run them at lower clock speeds and see how much performance they lose. Give them multi-GPU capability via DX12? Oh wait, they chopped that away as well.

Long story short, I know myself and many others are quite happy with the results we see here from the RX 480, considering the performance/power/price they have delivered, and we know they WILL get better in time, not held back by the proprietary BS Nvidia does, and not suffering the driver performance castration Nvidia has done for decades now. It takes time to fine-tune things like this, especially when your development team is MUCH smaller than the direct competition's. And if we were to go by how all the Nvidia fanboys act, Radeon would not have been competitive at all for decades, and that is simply far from the truth; it's just disgusting.

I know one thing: Radeons have not used under-spec components, or massive raw amperage intentionally shortening component life, and did not chop things away and put limiters in place to hold back performance on purpose while still overcharging. Can you say that about Nvidia? Nope.

Anyway, to the one above me: "they didn't improve the architecture like Nvidia did with Maxwell", ROFL. Go do some reading instead of trollbaiting, and you will see Radeon did a great deal of optimization/improvement with Polaris compared to what Nvidia did with Pascal, which for all intents and purposes is just a highly overclocked Maxwell (which itself was more or less an overclocked/optimized Kepler).

June 30, 2016 | 03:43 AM - Posted by Anonymous (not verified)

Your comment about frequency is meaningless to most people. BENCHMARKS are what matter most. AMD could not get higher frequencies because the GPU was intended for mobile, so it couldn't overclock well (their words, not mine).

Who cares WHY NVidia is faster? They are, so I'll buy their product.

I am recommending an after-market RX-480 to those people in that budget however. I'll rethink that when NVidia has competition with a Polaris GPU.

Above $200 I only recommend the RX-480, GTX1070, or GTX1080 and none until prices drop.

June 30, 2016 | 03:44 AM - Posted by Anonymous (not verified)

edit: pascal, not polaris.

June 29, 2016 | 11:45 PM - Posted by Anonymous (not verified)

Welcome to September 2014, AMD. Glad you finally caught up with an almost two-year-old nVidia product on a 28nm process.

June 30, 2016 | 01:38 AM - Posted by OxKing (not verified)

This card isn't a revolutionary rainbow-poopin' wondercard, but it is a step up for AMD. It feels a bit like AMD is still one step behind nVidia,
but at least they are still chasing it in some areas.
The only concern I have is that the market is already saturated by nVidia's 970. I mean, look at the Steam presence of that card.
But there may also be enough folks who haven't upgraded to this performance level yet, and for them the 480 really is a no-brainer in my opinion.

I have a feeling that with the 8GB and the DX12 capabilities it will age much better than the 970, and that it hasn't shown its full potential yet.
(There are still major driver issues with GTA V and with the power states at idle, leading to about 7 watts more power draw than there should be.)
Some nVidia-biased guys on YouTube benchmarked it against an overclocked custom 970 while the stock 480 wasn't overclocked at all.
I'm sure some "greenhorns" have already seen those benchmarks as proof of nVidia's superiority over AMD. 8)
But anybody who argues the 970 has the same or a similar price point now most likely forgot that the new 970 price only came into existence because of the upcoming release of the RX 480.
(A GTX 1060 might also very surely be influenced by it, price AND performance wise.)
Also, it will be more future-proof with its 8GB, DX12, and modern display adapter support.

I myself was a bit disappointed by the benchmarks after all the hype, but if you rethink it, it still is a very good card overall.
So I decided to upgrade from my R9 280, which with OC is about 25% less powerful than the 480.
Mainly because of the newer tech inside, because I need FreeSync and a bit more performance now for my new 34" UWQHD curved monitor.
And I'd rather play with lower details till Vega than buy a more expensive card from nVidia that refuses to support FreeSync.
Otherwise I might even have bought the 1070 at an about 65% higher price.

June 30, 2016 | 02:49 AM - Posted by Jason Becker (not verified)

Seeing what other sites are showing of the power draw for this card, I would be very hesitant to get one. AMD seems to be really pushing the limits by only having one connector.

June 30, 2016 | 03:29 AM - Posted by Camilo625

Really like these detailed reviews.

June 30, 2016 | 04:03 AM - Posted by SB (not verified)

Question coming from older hardware, still running an HD 7970 GHz Edition: will it run well with an i3-3220, so I can carry it for another few months to a year till I can save up for upgrades? My game collection mostly consists of DX9-11 titles, so DX12 isn't a concern for me at the moment.

June 30, 2016 | 01:19 PM - Posted by Anonymous Nvidia User (not verified)

Not sure if it will run well with your i3 processor, as most sites only bench with the latest Skylake or extreme Intel processors. Nvidia seems to get more frames when a game is CPU-limited, or in DirectX 11 in general, as well as at 1080p resolution, where processor power is more relevant. The power draw over PCI Express spec for the RX 480 might be a concern for your older hardware. If you can wait a month or so, prices will be coming down because of the saturation of new cards from AMD and Nvidia. If you want a stop-gap card, a Maxwell or even a Fury or 300-series AMD card should start being discounted soon. Don't buy this at full retail, as the resale value may be poor later, and if you're getting by now, keep saving.

June 30, 2016 | 05:41 AM - Posted by Batismul (not verified)

I think it's a good strategy from AMD to capture the 85% of the GPU market that sits in that price bracket and win some dGPU market share from Nvidia. If not overclocked, it keeps the power draw down, and people can play all their games at 1080p at full settings fine on budget to mid-range PCs.

Let's be honest, that is where MOST PC gamers are at with their PC specs and 1080p displays, so good move by AMD!

June 30, 2016 | 06:51 AM - Posted by Pp (not verified)

Why does GTA V show the total VRAM as 6GB? Any explanation for that?

June 30, 2016 | 08:44 AM - Posted by SQVelasquez

I honestly haven't been this excited for the future of computer graphics in a while. Already ordered a 480!

June 30, 2016 | 12:01 PM - Posted by nwgat (not verified)

The 480 memory spec is wrong; it's 256 GB/s. And what's up with 5.1 TFLOPS? Most other sites report 5.8 TFLOPS or something.

256 GB/s for the 8GB card, 224 GB/s for the 4GB.
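Both numbers fall out of the published specs (2304 shaders; 8 Gbps GDDR5 on a 256-bit bus for the 8GB card, 7 Gbps for the 4GB), and the 5.1 vs 5.8 TFLOPS split is just the base clock vs the boost clock. A quick sanity check in Python (the helper names are mine):

```python
# Sanity-check RX 480 memory bandwidth and FP32 TFLOPS from the
# published specs (illustrative back-of-the-envelope, not a benchmark).

def mem_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = per-pin data rate * bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

def tflops(shaders: int, clock_mhz: float) -> float:
    """Peak FP32 TFLOPS = shaders * 2 FLOPs per clock (FMA) * clock."""
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(mem_bandwidth_gbs(8.0, 256))   # 8GB card: 256.0 GB/s
print(mem_bandwidth_gbs(7.0, 256))   # 4GB card: 224.0 GB/s
print(round(tflops(2304, 1120), 2))  # at the 1120 MHz base clock: 5.16
print(round(tflops(2304, 1266), 2))  # at the 1266 MHz boost clock: 5.83
```

So sites quoting 5.1 TFLOPS are using the base clock and those quoting 5.8 are using the boost clock.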

July 1, 2016 | 01:35 AM - Posted by Anonymous (not verified)

Does WattMan support older Radeon series like the R9 295X2?

July 1, 2016 | 05:55 AM - Posted by Ry (not verified)

How absolutely ridiculous is our society where we purposely disable stuff that works perfectly to achieve lower value..

Capitalism is absurd.

July 1, 2016 | 04:11 PM - Posted by Prodeous

Small correction on the first page: the R9 Nano is 175W, quite close to the RX 480.

To be honest, if the Nano dropped in price it would be a more interesting product than the RX 480 by a nice margin...

I do wonder when they will adjust the prices, as currently the R9 390(X) and R9 Fury/Nano/Fury X make no sense in terms of price/performance against Nvidia... :(

Come on AMD drop the prices.. I want the Nano :)

July 1, 2016 | 05:51 PM - Posted by Anonymous Nvidia User (not verified)

I would return the card before it passes the retailer's warranty period. This power issue is turning serious. You can wait until either AMD rectifies the problem or a partner fixes it with an 8-pin connector and better power delivery.

Quality control may have been overlooked to meet demand on a seemingly rushed card. Gotta love AMD Robert's response: it only affects a few out of the hundreds of reviews, while failing to mention they didn't test the same way as those that found the issue. I'd call that trivializing and spinning at its finest.

Maybe the bad cards have poor ASIC quality and shouldn't have been released in the first place.

Poor AMD. Maybe it was part of their master plan to fry your motherboard so that you can just upgrade to the shortly coming Zen with shiny new motherboard required.

PS I'm just kidding about the master plan. Lighten up

August 24, 2016 | 01:09 AM - Posted by Thetwitt (not verified)

This website is a very good site.

October 2, 2016 | 06:38 PM - Posted by DrB (not verified)

@Ryan: if I grab the 480 8GB, is there a way to force my Windows 10 64-bit to use the VRAM as its main memory? MS has an annoying tendency to favor RAM (at all costs), even 128 MB of DDR3 RAM. Would this force the OS to use the 480's VRAM instead, or do gamers have to beg till we're on our death beds?
