
AMD Polaris Radeon RX 480 will launch for $199, more than 5 TFLOPS of compute

Manufacturer: AMD

AMD gets aggressive

At its Computex 2016 press conference in Taipei today, AMD announced the branding, pricing, and basic specifications for one of its upcoming Polaris GPUs, shipping later this June. The Radeon RX 480, based on Polaris 10, will cost just $199 and will offer more than 5 TFLOPS of compute capability. This is an incredibly aggressive move, obviously aimed at continuing to gain market share at NVIDIA's expense. Details of the product are listed below.

|                  | RX 480       | GTX 1070   | GTX 980     | GTX 970    | R9 Fury        | R9 Nano        | R9 390X     | R9 390      |
|------------------|--------------|------------|-------------|------------|----------------|----------------|-------------|-------------|
| GPU              | Polaris 10   | GP104      | GM204       | GM204      | Fiji Pro       | Fiji XT        | Hawaii XT   | Grenada Pro |
| GPU Cores        | 2304         | 1920       | 2048        | 1664       | 3584           | 4096           | 2816        | 2560        |
| Rated Clock      | ?            | 1506 MHz   | 1126 MHz    | 1050 MHz   | 1000 MHz       | up to 1000 MHz | 1050 MHz    | 1000 MHz    |
| Texture Units    | ?            | 120        | 128         | 104        | 224            | 256            | 176         | 160         |
| ROP Units        | ?            | 64         | 64          | 56         | 64             | 64             | 64          | 64          |
| Memory           | 4/8GB        | 8GB        | 4GB         | 4GB        | 4GB            | 4GB            | 8GB         | 8GB         |
| Memory Clock     | 8000 MHz     | 8000 MHz   | 7000 MHz    | 7000 MHz   | 500 MHz        | 500 MHz        | 6000 MHz    | 6000 MHz    |
| Memory Interface | 256-bit      | 256-bit    | 256-bit     | 256-bit    | 4096-bit (HBM) | 4096-bit (HBM) | 512-bit     | 512-bit     |
| Memory Bandwidth | 256 GB/s     | 256 GB/s   | 224 GB/s    | 196 GB/s   | 512 GB/s       | 512 GB/s       | 384 GB/s    | 384 GB/s    |
| TDP              | 150 watts    | 150 watts  | 165 watts   | 145 watts  | 275 watts      | 175 watts      | 275 watts   | 230 watts   |
| Peak Compute     | >5.0 TFLOPS  | 5.7 TFLOPS | 4.61 TFLOPS | 3.4 TFLOPS | 7.20 TFLOPS    | 8.19 TFLOPS    | 5.63 TFLOPS | 5.12 TFLOPS |
| Transistor Count | ?            | 7.2B       | 5.2B        | 5.2B       | 8.9B           | 8.9B           | 6.2B        | 6.2B        |
| Process Tech     | 14nm         | 16nm       | 28nm        | 28nm       | 28nm           | 28nm           | 28nm        | 28nm        |
| MSRP (current)   | $199         | $379       | $499        | $329       | $549           | $499           | $389        | $329        |

The RX 480 will ship with 36 CUs totaling 2304 stream processors, based on the current GCN breakdown of 64 stream processors per CU. AMD didn't list clock speeds, telling us only that compute performance will exceed 5 TFLOPS; by how much is still a mystery and will likely depend on final clocks.
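As a sanity check: GCN's peak FP32 rate is 2 FLOPs (one fused multiply-add) per stream processor per clock, so the >5 TFLOPS claim puts a floor under the final clock. A minimal sketch; the CU and SP counts are AMD's, but the resulting clock is only an inference, not an announced spec:

```python
# GCN peak FP32 throughput: 2 FLOPs (one fused multiply-add) per SP per clock.
cus, sps_per_cu = 36, 64
sps = cus * sps_per_cu                       # 2304 stream processors

# Clock floor implied by AMD's ">5 TFLOPS" claim (an inference, not a spec).
min_tflops = 5.0
min_clock_mhz = min_tflops * 1e12 / (2 * sps) / 1e6
print(f"{sps} SPs need >= {min_clock_mhz:.0f} MHz to hit {min_tflops} TFLOPS")
# -> 2304 SPs need >= 1085 MHz to hit 5.0 TFLOPS
```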


The memory system is powered by a 256-bit GDDR5 memory controller running at 8 Gbps and hitting 256 GB/s of throughput. This is the same resulting memory bandwidth as NVIDIA's new GeForce GTX 1070 graphics card.
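That 256 GB/s figure falls straight out of the bus width and data rate; a one-line check:

```python
# Bandwidth (GB/s) = bus width in bytes x effective data rate in Gbps.
bus_bits, data_rate_gbps = 256, 8
print(bus_bits // 8 * data_rate_gbps)   # -> 256 GB/s, same as the GTX 1070
```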

AMD also tells us that the TDP of the card is 150 watts, again matching the GTX 1070, though without more accurate performance data it's hard to draw conclusions about the architectural efficiency of the Polaris GPUs built on GlobalFoundries' 14nm process.

Obviously the card will support FreeSync and all of AMD's VR features, in addition to being DP 1.3 and 1.4 ready. 

AMD stated that the RX 480 will launch on June 29th.


I know that many of you will want us to start guessing where the new RX 480's performance will actually land, and trust me, I've been trying to figure it out. Based on the TFLOPS rating and memory bandwidth alone, it seems possible that the RX 480 could compete with the GTX 1070. But if that were the case, I don't think even AMD is crazy enough to set the price this far below the GTX 1070's $379 launch price.


I would expect the configuration of the GCN architecture to remain mostly unchanged on Polaris, compared to Hawaii, for the same reasons that we saw NVIDIA leave Pascal's basic compute architecture unchanged compared to Maxwell. Moving to the new process node was the primary goal and adding to that with drastic shifts in compute design might overly complicate product development.


In the past, we have observed that AMD's GCN architecture tends to convert its rated peak compute into gaming performance slightly less efficiently than Maxwell and now Pascal. With that in mind, the >5 TFLOPS offered by the RX 480 likely translates to realized gaming output somewhere between the Radeon R9 390 and R9 390X. If that is the case, the Radeon RX 480 should land somewhere between the GeForce GTX 970 and the GeForce GTX 980.
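To make that reasoning concrete, here is a rough sketch; the per-TFLOP efficiency factor is purely an assumption for illustration, not a measured number:

```python
# Rated peak compute from the table above (TFLOPS).
rated = {"RX 480 (floor)": 5.0, "R9 390": 5.12, "R9 390X": 5.63,
         "GTX 970": 3.4, "GTX 980": 4.61}

# Hypothetical assumption: GCN delivers ~80% of Maxwell's gaming performance
# per rated TFLOP. Illustrative only, not a measured figure.
GCN_FACTOR = 0.80

for name, tflops in rated.items():
    eff = tflops * (GCN_FACTOR if "GTX" not in name else 1.0)
    print(f"{name:>15}: {eff:.2f} effective TFLOPS")
# -> RX 480 floor ~4.00: in the R9 390 (4.10) / 390X (4.50) neighborhood,
#    i.e. between the GTX 970 (3.40) and GTX 980 (4.61).
```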


AMD claims that the RX 480 at $199 is set to offer a "premium VR experience" that has previously been limited to $500 graphics cards (another reference to the original price of the GTX 980, perhaps). AMD expects this to have a dramatic impact on the TAM (total addressable market) for VR.

In a notable market survey, price was a leading barrier to adoption of VR. The $199 SEP for select Radeon™ RX Series GPUs is an integral part of AMD’s strategy to dramatically accelerate VR adoption and unleash the VR software ecosystem. AMD expects that its aggressive pricing will jumpstart the growth of the addressable market for PC VR and accelerate the rate at which VR headsets drop in price:

  • More affordable VR-ready desktops and notebooks
  • Making VR accessible to consumers in retail
  • Unleashing VR developers on a larger audience
  • Reducing the cost of entry to VR

AMD calls this strategy of starting with the mid-range product its "Water Drop" strategy, aimed "at releasing new graphics architectures in high volume segments first to support continued market share growth for Radeon GPUs."

So what do you guys think? Are you impressed with what Polaris now looks like it's going to be?


May 31, 2016 | 10:17 PM - Posted by Technolucas (not verified)

I love AMD's pricing......aggressive might be an understatement. This is a win for the consumer even if you prefer green over red. Now hopefully Zen can do similar things in the CPU market.

June 1, 2016 | 01:50 AM - Posted by Mr Buster (not verified)

Don't count on it. They did talk about the "enthusiast market" when discussing Zen. I wouldn't expect great deals, $-wise, there.

June 1, 2016 | 05:44 PM - Posted by Anonymous (not verified)

That doesn't mean Zen is going to equal Intel in pricing; if it did, no one would buy them. They are very likely going to offer 8-core Broadwell/Haswell-class parts at quad-core i7 or i7-5820K pricing at most.

June 8, 2016 | 10:53 PM - Posted by Erick (not verified)

Not quite... I'd expect the 8-core to be around $500, and their 16-core to match the 5960X "pricing".

Don't expect super cheap chips if they expect to compete with Intel. $500-700 is a safe bet for where the pricing will land.

May 31, 2016 | 10:22 PM - Posted by Anonymous (not verified)

Look at the size of it. It's a small PCB. Can't wait to see AIBs' take on it for SFF.

May 31, 2016 | 10:23 PM - Posted by Anonymous (not verified)

HOLY MOTHER OF GOD!!!!!!!!!!!!!!!! THATS AMAZING FOR THE PRICE !!! NVIDIA ARE IN SERIOUS TROUBLE !!!!!!!!!!!!!!!!!!!!!!!!!11

May 31, 2016 | 10:27 PM - Posted by Frank Gao (not verified)

$199 for a card between the 970 and 980 would be extremely respectable... if not for the Nvidia Pascal release. I guess AMD really is going for the mid-low end market, as the rumors said.

Don't get me wrong, the idea of something with this performance landing in the mid-low end market would have been such an earthquake a couple of months ago... But I personally think they were a step late, and failed to impress when compared to the Pascal architecture from the opposing team.

May 31, 2016 | 10:32 PM - Posted by Frank Gao (not verified)

However I do like this move: the lower price and decent performance will mean higher adoption for VR => more VR content => happier me.

May 31, 2016 | 11:41 PM - Posted by Anonymous (not verified)

It's not impressive, but it does play in that performance space. If it's in that performance range for that price, it does steal some of the 1070's thunder. That's sort of the only card you would go with once you leave the AMD 300-series cards. Since there's a $150-$180 price difference between the two, it puts Nvidia in a weird place until they get something there. They either have to lose out to the more aggressive pricing, or lower the price of the 1070 to make it even more compelling. That's not to say the 1070 is overpriced.

June 1, 2016 | 12:46 AM - Posted by nanoflower (not verified)

There's also the likely introduction of other products in the P10 line. I fully expect to see a 490(x) that uses the full 2560 Shaders get talked about as we get closer to launch. That product will likely be close to the $300 mark.

June 1, 2016 | 01:16 AM - Posted by Anonymous (not verified)

If AMD can match the performance of the 1070 with a smaller die part, then that is very good for AMD. They can offer it for a much lower price. The 1070 is a salvaged part, so they are selling a large 1080 die with defective units as a 1070. It will be interesting to see actual die size comparisons. AMD may have made a smart choice going with a smaller die part first. I am doubting that the performance will be a match for the 1070 across the board though. I suspect it will be win some, lose some at best. Even if it is lower performance, the rumored lower price will probably make up for that.

June 1, 2016 | 08:06 AM - Posted by Penteract (not verified)

All semiconductor manufacturing works this way, regardless of CPU, GPU, or other. If it's made on a wafer, the parts are binned. Binning means each device is tested; the ones that are fully functional are sold at the highest tier, and the rest are sold as less capable versions of the same product. It's been going on since integrated electronics started being produced from thin slices of silicon crystals, since the 1960s.

So I suggest if you don't want to buy any "salvaged" parts that you buy the highest performing, most expensive part in the line-up, because everything else is a version of it with a defective area that has been "cut down".

June 1, 2016 | 09:01 AM - Posted by Anonymous (not verified)

Did I say salvaged parts are bad? Did you read what I posted? Almost all of the parts we buy are "salvaged" in some way. It is not uncommon to include things like redundant cache blocks which can be fused off, such that even the "full" version still has fused-off portions. The RX 480 will probably come in 480 and 480X (or whatever they choose as an indicator) variants, with the standard variant being slightly cut down and/or lower clocked. What I was saying is that the 1070 is still a ~300 mm2 die, which makes it more expensive to produce than a much smaller die. If AMD can match the performance of the 1070, or even come close, then that puts AMD in a very good position. Nvidia would be trying to sell ~300 mm2 parts against AMD's ~200 mm2 parts. Which one do you think is cheaper to produce?

If the 1070 is much higher performance than a 480x then this may not leave AMD in a bad position either, since Nvidia will not have anything in the $200 price range to compete with AMD 14 nm parts for a while. Are you going to buy an old 28 nm 970 or 980 now that the 16 and 14 nm parts are out?
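The die-economics argument here is easy to sanity-check with the standard dies-per-wafer approximation. GP104's ~314 mm2 is quoted elsewhere in the thread; the ~230 mm2 Polaris 10 figure is a rumor at this point, so treat the output as rough (and yield, which favors the smaller die further, is ignored):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Classic dies-per-wafer approximation (ignores defects and scribe lines)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(314))   # GP104 (GTX 1070/1080), ~314 mm2 -> ~187 candidates
print(dies_per_wafer(230))   # Polaris 10, ~230 mm2 (rumored)  -> ~263 candidates
```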

June 2, 2016 | 03:42 AM - Posted by Anonymous (not verified)

Nvidia (and by that I mean the sellers like ASUS etc.) would never drop the price of their cards if they are selling fine.

Even if AMD were a far, far better deal at the same performance level, it still wouldn't make sense to drop the price of cards that are all being sold.

Since there is currently no overlap, it's not even an issue. The GTX 1070 and GTX 1080 cards are going to sell extremely well until AMD has something to compete with.

May 31, 2016 | 11:55 PM - Posted by Camm (not verified)

Nvidia has nothing to compete with AMD in this price bracket; unsure why you're FUDding.

June 1, 2016 | 03:08 AM - Posted by Irishgamer01

Agree, except for the price.
At $299 it would have been a useless exercise. Nvidia hit them with a fatal blow with the 1080/1070. And here is the big BUT!!!!
But at $199, it's a different story. (Well, let's say $230, as who wants the 4GB model?)

The price is the key. It will appear in loads of Dells and HPs. (Sticker system prices would be very attractive.)

You can CrossFire and still save money against the price of a 1080.
(Nvidia may want to reconsider the $699... oops, $599 price tag.)

Surprised me. Most people will still want the Nvidia card, but money talks. I think most people struggle to get $700 together, but $199-230 is doable and you still might be able to pay this month's rent.

I guess I will have to drop the price of my 290Xs on eBay.

June 1, 2016 | 02:59 PM - Posted by Coupe

970-980 is mid-to-high end.

June 1, 2016 | 09:05 PM - Posted by Anonymous (not verified)

This 'mid-low end market' you appear to be sneering at accounts for most of us, according to the Steam hardware survey: http://store.steampowered.com/hwsurvey For ~90% of us, our primary display is 1080p or smaller. 980 owners seem keen to pipe up on forums, but we don't hear so much from the unwashed masses still running their HD 6850s. These people (my people) will be looking at this new card and thinking, 'hmmm'.

June 3, 2016 | 08:28 PM - Posted by Marcelo Viana (not verified)

Pascal architecture? Buddy, the Pascal arch is just Maxwell overclocked.

May 31, 2016 | 10:27 PM - Posted by Anonymous (not verified)

Do we know the cost of the 8GB version of this card, and will we see AIB models?

May 31, 2016 | 11:07 PM - Posted by quest4glory

I will guess it's that $249 number I saw somewhere, where the range was "between $199 and $249." A leak, but so many of them have been accurate as of late.

May 31, 2016 | 11:48 PM - Posted by Anonymous (not verified)

WCCFTech is saying $229 for the 8GB version?! WOW!!!! (if right!)

May 31, 2016 | 10:31 PM - Posted by rdjg22 (not verified)

I have no bias; best value wins. If it performs even just a bit better than a 970, it'll be worth it (at least for you guys in the U.S.; here in Canada it'll be competitively priced at $600+).

May 31, 2016 | 10:34 PM - Posted by Anonymous (not verified)

"In the past, we have observed that AMD's GCN architecture tends to operate slightly less efficiently in terms of rated maximum compute capability versus realized gaming performance, at least compared to Maxwell and now Pascal"

On DX11, so once the DX12 and Vulkan APIs have wider support there should be some re-benchmarking for at least the past 2 GCN generations to see how the new graphics APIs has changed performance for those past GCN generations that did not fare so well under DX11. So those past few AMD and Nvidia 28nm node generations need to be retested using DX12 and Vulkan based games, while the latest Pascal and Polaris are tested against each other.

I'd like to see just what the new DX12/Vulkan graphics APIs can do for AMD's "GCN 1.1/Gen 2" and "GCN 1.2/Gen 3" older SKUs that have the async-compute features that where not taken advantage of under DX11/OpenGL whatever version.

I'd also like to see how AMD's SKUs from the previous 2 generations have improved via AMD's new commitment to driver maintenance also.

June 1, 2016 | 12:46 AM - Posted by Anonymous (not verified)

The 390X already seems to compare quite well in some of the DX12 games. We are still early for DX12 and Vulkan, so I think the performance of AMD parts will continue to improve as games and game engines are written with DX12 in mind from the start. Hopefully DX12 and Vulkan will not require as many game-specific optimizations. I always found it ridiculous that drivers were optimized on a game-by-game basis.

June 1, 2016 | 05:22 AM - Posted by renz (not verified)

Personally I think it is better for drivers to be optimized by the GPU maker on a per-game basis. First, no one knows the architecture better than the driver team themselves. Look at DX12 right now: why does using DX12 end up being slower on Nvidia cards? If the hardware doesn't work like AMD's GCN, can it not exceed Nvidia's DX11 optimization? (Even without async, Nvidia performance is still lower in DX12 vs DX11 in Ashes.)

And second, devs usually move on when they've finished with one game. If optimization is done by the developer and not so much by the GPU driver, then when a new GPU comes out two or three years later, will those developers be willing to revisit their old games and make a new optimization patch for the new batch of cards? That's assuming the game studio still exists and hasn't been disbanded for various reasons.

June 1, 2016 | 09:28 AM - Posted by Anonymous (not verified)

I assume drivers contain compilers for compiling shaders. When is Microsoft going to start optimizing their compiler for my applications? I don't want to have to optimize my own code. Having massive driver packages with huge amounts of game specific optimizations is ridiculous. DX11 just seems like a bad design. It is like programming a performance critical application in a high level language. With a high level language, the programmer has little ability to optimize beyond the basic algorithm.

Why is DX12 slower on Nvidia cards? Probably because they essentially have not implemented asynchronous compute yet. AMD has essentially been working on DX12 based hardware for many years because Mantle is very similar to DX12. Even the original PS4 APU implements 8 asynchronous compute engines, I believe. The Xbox One chip only has 2 though, since it is an earlier GCN revision. With DX12 and Vulkan (Mantle derived) being lower level, there may not be that much optimization to do in the drivers. It should allow the game developers or engine developers to produce more optimized code from the start. AMD's DX11 driver may not be that well optimized, but they are probably ahead in DX12 development, for both hardware and software. Nvidia seems to have a better optimized DX11 driver so even when they get their DX12 driver better optimized, they may not gain much performance over their DX11 levels. AMD hardware has often had more raw compute than competing Nvidia hardware. With better optimized DX12 games, the old 390x is often matching much more expensive Nvidia parts.

June 1, 2016 | 12:37 PM - Posted by Anonymous (not verified)

"I don't want to have to optimize my own code."

Then you're not going to like DX12: the entire purpose of DX12 as a low-level API is to move the burden of optimisation from the GPU vendor to the game/engine developer.

"Why is DX12 slower on Nvidia cards?"
A better question would be "why does DX12 not provide the speedup for Nvidia that it does for AMD?", since in testing the delta between DX12 and DX11 on Nvidia cards is +/- very little, if anything. And the answer there is that there is no more performance to squeeze.


June 2, 2016 | 02:59 PM - Posted by Anonymous (not verified)

No, it's that Nvidia lacks fully-in-hardware async-compute features compared to AMD's ACE units. Nvidia has to resort to much slower software methods to emulate the async-compute hardware features its GPUs lack, and that makes DX12 and DX12/Vulkan games slower for Nvidia.

And the game developers that are mostly script kiddies will have automated Vulkan/DX12 code designers to generate the hard-to-understand memory management features in generated code. So all gaming engine SDKs have automated code generators/plugin capabilities that allow such inexperienced coders to still produce DX12/Vulkan-ready code that can later, if need be, be optimized by a more experienced real programmer.

Just because DX12/Vulkan have more difficult-to-master, close-to-the-metal features and a simplified device driver model does not mean that these graphics APIs will be unusable by novice programmers, as there will be SDK features to help generate the proper code to utilize the new graphics APIs. Most game developers use the SDKs and software tool chains/plugins provided by the gaming engine SDK makers, and there will be plenty of tools/plugins available to automate the hard parts of DX12/Vulkan; just wait a while and see.

June 3, 2016 | 12:34 PM - Posted by renz (not verified)

"Just because DX12/Vulkan have difficult close to the metal features and a simplified device driver model does not mean that they will be unuseable by any novice programmers"

this is the problem. it has been told since the very beginning that low level API is for those that know what they're are doing. pc is not console where the hardware is static for years. that's why you see underwhelming result in most current DX12 title. and most of them were considered as triple A stuff.

"It's that Nvidia lack the fully in its GPU's hardware async-compute features compared to AMD's ACE units fully in hardware async-compute ability."

look at AoS. async compute can be turn on and off. even with async compute being turned off nvidia card still take performance hit with DX12. so it is clear the problem is not about async compute only.

June 1, 2016 | 12:46 PM - Posted by renz (not verified)

But if you look at AoS, even without async compute Nvidia still gets negative DX12 performance vs DX11. So async compute can't be the only reason why Nvidia ends up slower in DX12. If anything, I'd suspect that devs are more used to GCN hardware because of the consoles, so it is much easier for them to get better DX12 performance out of GCN.

And there is a problem for future cards, as I mentioned before. A Hitman dev mentioned that stuff like async compute needs to be tweaked on a per-card basis, because different cards have different compute-to-bandwidth ratios. It is not a good idea if a newer card ends up with poor performance because there is no optimization for it in the game.

June 3, 2016 | 05:26 PM - Posted by Simon (not verified)

Newsflash: async is used to speed up rendering... So it's no wonder that they're slower when it's not enabled; there's no need to look beyond that. Even the developers themselves told everyone so a while back in an open letter: enabling async was making things slower for Nvidia, because it isn't done in hardware. Too bad I can't find the link on short notice, but it's out there.

In other words:

Hardware async:
performance ==================OFF============ON
Software async:
performance ========ON========OFF============
                                 ^^ LOST POTENTIAL, had it been supported in hardware

Now you can see clearly why a card like the GTX 1080, which is much faster in DX11 than the Fury X, is almost tied with it in AotS (on an i7-6700K @ 4.5 GHz): it doesn't profit from the lost potential and loses the lead it has in DX11.

As for tweaking: doubtful, and au contraire, it gives further evidence that it's a case of lost potential, not lack of tweaking; the margins are the same in Hitman and AotS at 4K.

Hitman results: http://www.tomshardware.co.uk/nvidia-geforce-gtx-1080-pascal,review-3355...

GTX1080 advantage = 13.7%

AotS results: http://www.tomshardware.co.uk/nvidia-geforce-gtx-1080-pascal,review-3355...

GTX1080 advantage = 14.2% (within margin of error of Hitman)

And 14% is a far cry from the 48% it has in DX11 games in the same review (see the AotS link for DX11 BF4 and GTAV results above, which are also within margin of error).

The numbers are consistent, which doesn't point to a tweaking problem. The Hitman and AotS engines are different, and their developers and approaches have been different too, nothing to do with being used to GCN architecture or consoles; yet somehow they would've arrived within the same statistical margin of error? Highly doubtful.

What is more likely is that the lost potential highlighted above, async OFF versus ON in hardware, is consistent across titles.

And that lost potential seems to be around 30% which is massive.

I rest my case
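For anyone wanting to reproduce the margin arithmetic above: the advantage figures are simple frame-rate ratios. The helper below uses placeholder inputs, not numbers from the linked review:

```python
def advantage_pct(a_fps: float, b_fps: float) -> float:
    """Percentage lead of card A over card B at identical settings."""
    return (a_fps / b_fps - 1) * 100

# Placeholder inputs, NOT figures from the linked Tom's Hardware review:
print(f"{advantage_pct(45.7, 40.2):.1f}%")   # -> 13.7% lead for card A
```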


May 31, 2016 | 10:45 PM - Posted by djotter

I'll wait for performance metrics, but if it is what I'm looking for, then at that price it is under the import duty limits for where I live, so I could buy from Amazon US and avoid the local price gouging. Bring on the reviews! :)

May 31, 2016 | 10:47 PM - Posted by Anonymous (not verified)

Ryan, tsk, tsk

I don't think even AMD is crazy enough to set the price this far below the GTX 1070's $379 launch price.

The 1070 launches on June 10, and only as the Founders Edition, at $449.

May 31, 2016 | 11:05 PM - Posted by Ryan Shrout

Wait for June 10th before tsk-ing me please.

May 31, 2016 | 11:52 PM - Posted by Anonymous (not verified)

While I wait, can you point me to your coverage of this issue?

GeForce GTX 1080 Founders Edition fan issue
https://forums.geforce.com/default/topic/938975/geforce-1000-series/gefo...

June 1, 2016 | 08:03 AM - Posted by Anonymous (not verified)

Update the software you use to control the fan curves...

June 1, 2016 | 04:55 PM - Posted by Anonymous (not verified)

Try reading. That doesn't fix it.

June 1, 2016 | 03:21 AM - Posted by Irishgamer01

Tsk tsk... the guy is right.
The 1080 cannot be obtained (sold out).
So the only option and price that I could see was the higher-priced Founders Edition. I searched online. (I was paid late, so by the time my wages came through they had all sold out.) I did see cheaper editions listed, but they never seemed to be in stock. I was checking 3 times a day.

So to me, the price I would expect to pay is the Founders Edition price, as this is the effective retail price at the moment.

So I understand (but do not approve of) why you want to pretend the retail price is the lower one; in actual fact, it isn't at the moment. Until the cheaper cards are available, I consider that guy right.

So the 1070 will be the same.

But then two Polaris cards might make the 1070/1080 look even more expensive.

May 31, 2016 | 11:18 PM - Posted by Anonymous (not verified)

Yes, the early birds get it up their wormholes for an extra good wallet-draining from Nvidia. But even as costly as Nvidia's GPU SKUs are, you still get more cores per dollar on Nvidia's overpriced SKUs compared to those really pricey 10 x86 Intel cores on that E (for Exorbitant) series i7 enthusiast SKU at $1700+! Zen will be at a much saner price point for those 8 cores, and with better integrated graphics once the Zen APUs are available.

May 31, 2016 | 11:32 PM - Posted by quest4glory

I hope you're right. AMD is the hero we need. I don't know if they can deliver, but we need them.

June 1, 2016 | 01:07 AM - Posted by -- (not verified)

ZEN + RX

blue and green can go cry in a corner together.

so long highway robbery prices!!!!!!!!!!!!!!!!

June 1, 2016 | 05:59 AM - Posted by arbiter

Early leaked results have the 480(X) needing CF to match the GTX 1070. Even with AMD's claim of 480s in CF beating a 1080, well, that is a claim by AMD, and you can't really put much behind it given AMD's PR history to start with. Anyone with some logic will agree AMD's PR claims over the last few years have been suspect, claiming their product is faster than it ends up being when put to independent review. I am betting that the 480 in CF, maybe even the 480X, won't beat a GTX 1070 in-game by that big of a margin, 10-15% or so.

Cue the AMD fanboy attacks.

June 1, 2016 | 09:53 AM - Posted by Spunjji

No need for attacks; I don't doubt that what you're saying is true. I don't need 1070 performance, though, so I'm more interested in the bottom end of the market moving up to make PC gaming more affordable for the newbies we need to bring in to keep it alive.

June 1, 2016 | 04:05 PM - Posted by arbiter

Not attacking, just stating what we have seen from the history of AMD's PR department. There has been a bit of a disconnect between them and the tech side.

June 1, 2016 | 05:01 PM - Posted by Anonymous (not verified)

Unlike Nvidia's, right? Didn't they say Maxwell had async compute? They went as far as arguing with Oxide that it just was not active at the driver level, which to this day they haven't turned on.

Pascal P100 was introduced with compute preemption; come the GTX 1080, compute preemption turns into async compute.

June 1, 2016 | 10:03 AM - Posted by Alamo

Doesn't make a lot of sense; there aren't really any surprises left for the 480. Performance will be right in the middle between a 970 and a 980. If a 1070 beats 970s in SLI, then scaling must be something like 30-40% instead of the usual 80%.
The 480 will make a real dent in 1070 sales, because CrossFire 480s at $398 against the $449 FE of the 1070 is $50 cheaper, and only about $20 more than the cheapest possible 1070; it's a no-brainer. Add to that the fact that, with the TDP of a single 290X, most people won't even have to invest in a new power supply.
Anyway, I don't think the 480 is 20% slower than a 970. For it to lose in CrossFire to a 1070, just no way; that would basically put it on par with 380X performance instead of the 390.
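Putting rough numbers on the CrossFire math above; the prices are real, but the normalized performance figures are guesses for illustration only:

```python
# Prices are from the article/launch coverage; the normalized performance
# figures are guesses (GTX 970 = 1.00) purely for illustration.
price_480_4gb, price_1070_fe = 199, 449
perf_480_guess, perf_1070_guess = 1.15, 1.55   # hypothetical

cf_price = 2 * price_480_4gb                   # $398 for two 4GB cards
scaling_needed = perf_1070_guess / perf_480_guess - 1
print(f"2x RX 480 = ${cf_price}; CF scaling needed to match the 1070 guess: "
      f"{scaling_needed:.0%}")                 # ~35%, well under a typical ~80%
```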

May 31, 2016 | 11:32 PM - Posted by quest4glory

The cynical side of me says it will be addressed by Nvidia price drops and cut-down products, which is of course good for competition, but that instead of more customers for AMD it will result in another Nvidia win. Basically, that's the "too little, too late" mentality, but I really hope this works. EDIT: Had a thought, Nvidia might rebadge the 970 and sell it for $200.00.

I have been an Nvidia user for the past 12 years, but I have owned both AMD and Nvidia. I bought a 390 for my stepson as a Christmas present, thinking a FreeSync monitor might come sometime in the future. Seems like that might have been a good idea; now he can take over when he gets a job.

June 1, 2016 | 01:45 AM - Posted by Anonymous (not verified)

If AMD has a small-die part that can compete with the 1070, then Nvidia can't really compete very well on price. The 1070 is a salvaged part, so Nvidia is selling silicon equal to a 1080 die for every 1070. Nvidia doesn't seem to have a smaller die part available anytime soon(?). They apparently have GP104 at 314 mm2 and GP100 at a massive 610 mm2. I would consider 300 mm2 to be a large die for a 14 or 16 nm process. Yields are going to be a lot lower than on the 28 nm process, which has been tweaked for about 5 years now.

I would not buy a 28 nm part at this point, even for a relatively cheap price. The power consumption is going to be much lower on the 14 nm part, even if the performance is comparable. Also, I would not recommend buying a 3.5 GB 970. The lower memory capacity may be more of a handicap going forward, since all of the current generation consoles have 8 GB and make more than 4 GB available to the game (some memory is reserved for the OS and background processes). Also, AMD parts seem to be looking better under DX12; Nvidia parts do not.

June 1, 2016 | 03:45 AM - Posted by renz (not verified)

Nvidia already nailed the hardest part (610 mm2), so I doubt the smaller chip isn't already done. If anything, they are just waiting for GM206 stock to clear out before fully launching GP106. I see it more like AMD wants to avoid going head-to-head with Nvidia in the same performance segment, because if they did, most people would still end up getting a GeForce even if AMD offered something similar at a cheaper price.

June 1, 2016 | 09:56 AM - Posted by Anonymous (not verified)

This is not how it works. Nvidia has to compete with all the fabless chip makers for slots to manufacture wafers. When they make wafers, they can choose between making high-end, high-margin chips or lower-end, lower-margin chips. The choice is obvious. AMD takes the other route because their high-end parts have traditionally sold less well and Nvidia has a mindshare advantage. AMD has the same problem with Intel, which leads them to pursue lower-margin markets, which gives them less R&D money, and the cycle continues. AMD only comes out ahead when they 'luck' into making a great part once in a while, like the old 5870, the original Athlon, etc. Maybe Zen will be a repeat of that. Maybe Polaris will also be great enough.

June 2, 2016 | 02:39 AM - Posted by renz (not verified)

But that doesn't mean the final spec for the chip isn't done yet. Plus, Nvidia has quite a close relationship with TSMC. Back in 2012, when 28nm was still new, Nvidia complained about capacity constraints making it difficult for them to meet demand. So what did TSMC do? They gave a bit of Qualcomm's (SoC) and AMD's (GPU) fab capacity to Nvidia to satisfy Nvidia's demand.

July 15, 2016 | 05:43 PM - Posted by agendafx (not verified)

Plus, Nvidia has quite a close relationship with TSMC. Back in 2012, when 28nm was still new, Nvidia complained about capacity constraints

Yup, money does tighten relationships to make the monstrobomination. Nvidia, with enormous chips 6-8 times larger than the Qualcomm SoCs you mention, will always have problems, especially on a boggishly immature tech process like TSMC's first-gen HKMG was. And as TSMC makes money on every wafer they make, they needed to offer some "wafers for free", since monstrous chips have miserably poor yields, and Nvidia yet again couldn't fulfill demand with their niche vaporware products. So prices went rampantly high for all GPUs in early 2012, and AMD was milking their higher-yield products, so they didn't need excessive production, as the people who could afford it didn't really need CFX or 8x GPUs in their home computers... and then the other legend was born: GPU mining.

May 31, 2016 | 11:07 PM - Posted by zgradt

RX is not much of a departure from R9. They're just going all Roman numerals-like. Strange choice though. Follow up with RXI? Marketing never considers the next step.

June 1, 2016 | 12:40 AM - Posted by BillDStrong

It's better than them going Hexadecimal, isn't it?

Kidding aside, it's not that bad. And the X in product names hasn't done badly in the tech sector.

June 2, 2016 | 07:04 PM - Posted by Anonymous (not verified)

Nah, I don't think it's a Roman numerals thing.

I think at minimum, they want to move away from the R7/R9 segmentation in their GPUs, but they really like that X, so they're going to put it there instead.

It could be an attempt to change up their labeling scheme across the board, though. This is entirely conjecture, and is not a prediction in any way. But I think there aren't going to be 400X cards. I suspect that there won't be a 470 and 470X, 480 and 480X, 490 and 490X. As far as I know, there are only 2 Polaris chips, 10 and 11. There's no way they can fill all those sub-segments with those two chips in varying degrees of cut-down-ness. I think they're going to simplify their lineup significantly - 470, 480, 490, and whatever Fury v2 gets called.

But, they still really like that X. And I think they think that the R7 and R9 segmentation impacts their sales of the R7 cards. So they're going to turn their graphics cards into the RX series instead. They keep their familiar R(something) branding, they get to keep that X they like so much, they get a simplified product lineup that's a lot easier to fill up. Everybody knows the three digits after the R(whatever) is what matters, anyway.

If that turned out to be pretty close to AMD's reasoning, then it probably wouldn't surprise me if their Zen APUs come with an AX designation instead of A10/A8/etc. Then, along with Zen CPUs with the FX designation, their primary product portfolio all matches up nicely.

I kinda feel like it would be taking the theory a little too far if they changed their RAM from R9/R7/R5/R3 to, say, MX, and their SSDs to SX. But who knows? Maybe their marketing department really is that crazy. lol

May 31, 2016 | 11:15 PM - Posted by biohazard918

This does not look like much of an upgrade from my 290 :(

Fewer compute units, less memory bandwidth, slightly more TFLOPS. But it uses less power; that's what we look for in desktop GPUs, right? Efficiency?

Also, I can't believe they had the stones to stand up on stage and show off a CrossFire setup with 50% GPU usage and claim that's a good thing. They showed off a broken CrossFire setup; there is no other way to describe it.

Their press conference has left me feeling sick.

May 31, 2016 | 11:34 PM - Posted by quest4glory

Less hot.

June 1, 2016 | 01:51 AM - Posted by Anonymous (not verified)

Lower memory bandwidth compared to a 290/390 probably isn't an issue. AMD wouldn't want the added cost of a wider interface, and Polaris has several added and/or improved memory compression schemes that will make up for the lower bandwidth. Lower power consumption is a very good thing, even for a desktop GPU. It means cheaper cooling solutions and cheaper power delivery for those looking for a value part. For overclockers, it can mean more headroom with expensive power delivery systems and expensive coolers.
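A rough sketch of how compression can offset the narrower bus; the 25% traffic saving below is a made-up illustrative figure, not an AMD spec:

```python
raw_bw_480 = 256            # GB/s: 256-bit GDDR5 at 8 Gbps
raw_bw_390 = 384            # GB/s: 512-bit GDDR5 at 6 Gbps

# Hypothetical: assume improved color compression saves ~25% of memory traffic.
traffic_saved = 0.25
effective_bw_480 = raw_bw_480 / (1 - traffic_saved)
print(f"{effective_bw_480:.0f} GB/s effective")   # ~341 GB/s vs the 390's raw 384
```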

June 1, 2016 | 04:21 AM - Posted by Anonymous (not verified)

One word: Vega

June 2, 2016 | 10:27 AM - Posted by funandjam

It's not meant to be an upgrade for people who already have a card with the same performance level!

June 25, 2016 | 05:52 PM - Posted by Anonymous (not verified)

The 480 easily beats the 290.

May 31, 2016 | 11:21 PM - Posted by Jack160390 (not verified)

What about R9 390/390X owners then...
I kinda regret it, haha...
The RX 480 will be a good card...

May 31, 2016 | 11:21 PM - Posted by BeeksElectric (not verified)

Been looking for an upgrade for my 7870, and this looks like the perfect price-performance point to pull the trigger. A Newegg cart with that and one of those LG ultra-wide monitors is going to be mighty hard for me to resist hitting Purchase on come the end of June...

May 31, 2016 | 11:44 PM - Posted by zgradt

They're probably shooting for the minimum VR specs according to Oculus. I expect it will perform slightly faster than the 390. That's AMD for you. Their motto should be "not the best, but good enough for most!"

I've seen a lot of comments that Nvidia's newest cards are overkill for 1080p. Why pay for extra performance that you don't need? Seems like taking a step back to me. I was hoping their new chips would be able to compete with the Fury.

June 1, 2016 | 12:43 AM - Posted by BillDStrong

Nvidia's target for that series is VR and 4K gaming. It is indeed overkill for today's 1080p gaming. Of course, the one constant in the games industry is that games will squander all the power you give them.

The last Arkham title proved that, didn't it?

June 1, 2016 | 01:55 AM - Posted by Anonymous (not verified)

It was unclear to me what exactly was wrong with the last Arkham title. It sounded like a bad console port. I was wondering if they designed the memory management around having a large unified memory space, as on a console, and then couldn't get it to work smoothly on PC hardware. A lot of PC gamers are probably still playing with less than 4 GB of graphics memory.

June 1, 2016 | 02:30 AM - Posted by quest4glory

Just about everything is wrong with it.

There are effects they didn't bother to port over to PC, such as bokeh depth of field in the zoomed in view, etc. That's just for starters. Simply put, the game actually looks better on a console. That is disgusting.

Jump in the Batmobile and drop into the 50 fps mark, even with an overclocked 980 Ti / TITAN X.

SLI was not implemented and will not be implemented.

The game crashes at random points, on a stable machine. This is after all of their patches, most of which were to bring in the DLC, not all of which came over from console (PS4-exclusive DLC, for instance).

June 1, 2016 | 04:30 PM - Posted by jimmyjohn (not verified)

Don't knock 144Hz Vsync until you have tried it...

June 1, 2016 | 12:29 AM - Posted by Zerreth (not verified)

Any Fury or Nano CrossFire benchmarks against a 1080 out there? Might be an early indicator of projected performance.

June 1, 2016 | 12:46 AM - Posted by Anonymous (not verified)

I'm usually bashing AMD for good reason, but you guys are being harsh. It's as if you forgot the 3dfx days. Things have been crazily overpriced for the PC lately. This is the first step in the right direction. Kudos, AMD, for understanding what is necessary to keep the PC market thriving. I know Nvidia is just going to come out with something, or drop prices to counter, and it's about time we have some real competition and choices to counter these consoles.

June 1, 2016 | 12:50 AM - Posted by General Lee (not verified)

If this card can reach 390X/980 performance at $229-249 for the 8 GB model, it's a really nice deal.

The thing is, nothing about Polaris looks like it has an edge over Pascal. It seems like GCN has gotten stuck below the 1.3 GHz line and is not budging, while Pascal is blazing past 2 GHz overclocks. This could really end up hurting AMD in the long run. GP106 might easily surpass the RX 480 in its category in just a few months.

If you look at the GTX 1070, that's a sub-150W card just like the RX 480, and it offers up to 6.5 TFLOPS at boost clocks, while AMD's is rated somewhere between 5 and 5.5 TFLOPS. Historically Radeons have had worse efficiency, so this doesn't paint a very good picture. Of course, that 150W number is probably just the 6-pin connector limit, so for all we know the RX 480 can be anywhere between 75 and 150W.

Personally I'm hoping the RX 480 is the cut-down chip, and the full P10 is 2560 cores and bundled with GDDR5X, and I'm seriously hoping I'm wrong about GCN not clocking past 1.3 GHz. Lisa Su did mention Polaris will be between $100 and $300, which could be interpreted to mean there is a higher-tier $300 card yet unannounced. That could theoretically be enough to compete with a 1070.

I'm using a 970 right now, so I'm looking at upgrading to at least the 1070's performance category, or close to it if AMD could just manage that at $300. I'll just be disappointed if AMD really doesn't even come close to the 1070, Nvidia's promise of a $379 1070 ends up a red herring, and in reality I end up paying 500€ in Europe for that card, with no competition for it until next year.
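The "up to 6.5 TFLOPS" figure checks out against the GTX 1070's published boost clock, and the 150W reading is consistent with connector limits; a quick verification:

```python
# Peak FP32 = cores x 2 FLOPs (FMA) x clock. 1683 MHz is the GTX 1070's
# rated boost clock.
cores, boost_ghz = 1920, 1.683
print(f"{cores * 2 * boost_ghz / 1000:.2f} TFLOPS")   # -> 6.46, "up to 6.5"

# And the 150W reading: PCIe slot (75 W) + one 6-pin connector (75 W) = 150 W,
# a board-power ceiling rather than a measurement, hence the 75-150 W range.
```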

June 1, 2016 | 02:02 AM - Posted by Anonymous (not verified)

The 1070 is a salvaged GP104 die though. If AMD is even in the same ballpark with a smaller die part, then it could be a great deal, both for consumers and AMD. Nvidia can't really go that low on price; they are still selling a ~300 mm2 die whether it is as a $700 1080 or a much cheaper 1070.

June 1, 2016 | 03:57 AM - Posted by renz (not verified)

Nvidia would probably be able to sell it cheaper if they wanted to. The thing most people are forgetting is that back in 2010/2011, Nvidia's 300+ mm2 chips were meant for sub-$250 cards (GTX 460/GTX 560 Ti). The original GK104 was probably meant for a GTX 660, but seeing how they could ramp the clock into competitive territory vs the 7970, suddenly a chip meant for the sub-$250 market was shifted into the sub-$500 market. Then just look at 980 vs 970 pricing, and 1080 vs 1070. Why is there such a large price difference between the x80 and x70 parts? Did Nvidia undercut themselves too much with 970 and 1070 pricing? Honestly, I don't think so. Instead they just 'milk' consumers with the 980 and 1080.

June 1, 2016 | 09:58 AM - Posted by Anonymous (not verified)

I would suspect that 16 nm wafers are a lot more expensive than the 32 or 28 nm wafers from back then. Process tech has just been getting more and more expensive. Also, I suspect that yields are much lower on the new 16 nm and smaller processes than they were on 28 nm, further increasing cost per die. These smaller processes have taken significantly longer to develop than previous nodes, and I have not heard anything good about yields. I wouldn't be surprised if a 300 mm2 die at 16 or 14 nm is close in cost to, or maybe much more expensive than, a 600 mm2 28 nm die. Nvidia's 600 mm2 die on 16 nm for the HPC market seems to be more in the $10,000 to $15,000 range with HBM2; I would expect that to reflect the production cost, to some extent. On 28 nm, though, the intro price for a 600+ mm2 AMD Fury card with HBM was $650.

June 1, 2016 | 09:58 AM - Posted by Spunjji

"It seems like GCN has gotten stuck below the 1.3 GHz line and is not budging"

That's like saying AMD was "stuck at" 2 GHz with the Athlon 64 while Intel rampaged past 3 GHz with the Pentium 4. It's true, but it has absolutely nothing to do with performance.

Agreed about the upgrade prospect over a 970 as I own one too, but do we really need new cards? Like, really? XD

June 1, 2016 | 10:51 PM - Posted by General Lee (not verified)

Clock speeds absolutely have to do with performance. Pascal and Polaris aren't that different in terms of architecture, so the comparison to ancient CPUs isn't valid. The reality is that GCN has hovered between 1 and 1.2 GHz since its inception, while Nvidia has managed to raise the bar significantly from Kepler to Pascal. That's one of the main reasons Maxwell did so well and AMD lost a ton of share. Imagine what Polaris could do with the same clocks as Pascal. It's understandable if they can't reach clocks quite as high due to architectural differences, but just looking at pure percentage increases from the previous gen, I'd say there's cause for concern between AMD's and Nvidia's improvements. We'll see more once we get to overclock Polaris, and if Vega is done on TSMC, we'll see how much the process changes things.

And yes, I do need new cards for my 40" 4K monitor, pretty desperately. I'd like to wait for a cut-down Vega part maybe, or the Nvidia equivalent, but the prices are getting out of hand. So I'm looking at a 1070 as a stopgap solution.

June 1, 2016 | 01:06 AM - Posted by -- (not verified)

team green just got fking CRUSHED!!!!!!!!!!!

June 1, 2016 | 02:13 AM - Posted by arbiter

They just got crushed? Early benchmarks show it takes 2 of those 480s in CF to match a GTX 1070, and that would be about the same price if you use the 4GB cards.

June 1, 2016 | 02:32 AM - Posted by quest4glory

Mm, no, they are showing 2 480s in CF beating a 1080. For at least $100 less, if not more (when buying a Founders edition, etc.)

June 1, 2016 | 05:29 AM - Posted by renz (not verified)

And that is not really strange. In the past, people would often get 2 mid-range cards to get performance equal to or better than the fastest single GPU. And often that dual combination ended up being a whole lot cheaper; the fastest single GPU will often command a premium price, and most often it does not justify the cost increase vs its much slower brother. Plus, multi-GPU has its own set of issues.

June 1, 2016 | 06:03 AM - Posted by arbiter

It beat a 1080 in a game that is massively AMD-optimized. Oh, and as a lot of people have seen, they weren't even using the same settings on both systems. Yeah, the video on the right was running at higher settings than the left. Plenty of people saw it.

June 1, 2016 | 12:50 PM - Posted by quest4glory

More benchmarks to come.

June 1, 2016 | 04:01 PM - Posted by arbiter

AMD can claim to be faster using their cherry-picked benchmarks with their cherry-picked settings. I, like a lot of people, will wait till independent reviews do their tests and give us the truth.

June 1, 2016 | 05:02 PM - Posted by Anonymous (not verified)

In the meantime you sure are spreading FUD all over several forums.

June 1, 2016 | 07:50 PM - Posted by Anonymous (not verified)

Honest question - were you this skeptical when Huang pointed up at the screen and said, "And it only runs at 67°C"? Did you tell people that Nvidia used a cherry-picked render with cherry-picked settings, and to wait until independent reviewers do their tests and give us the truth?

June 2, 2016 | 10:32 AM - Posted by funandjam

That is an excellent question, I like your style.

June 2, 2016 | 03:00 PM - Posted by Anonymous (not verified)

Thanks. I'd bet a nice shiny US Quarter that he won't come back and answer it, though. He'll ignore it, while simultaneously spreading the same argument on other sites on the net.

June 2, 2016 | 10:20 AM - Posted by Anonymous (not verified)

Last time I tried to post a link the post got eaten, so we'll see if this one survives. But the settings used - and the reason the 1080's image quality looks different - were explained already, and it's apparently not AMD's issue, it's Nvidia's.

https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_q...

June 1, 2016 | 02:04 AM - Posted by Anonymous (not verified)

This seems a fairly positive start to FinFET for AMD, I think.

That CFX score against the 1080 in Ashes seems low... it should be higher, and I wonder if the CPU or scaling was off. That 51% usage could indicate something there. Either way, knowing AMD drivers and history, that will only improve over time, and you don't need to guess what will happen to Maxwell performance over the same period.

I own Titan X SLI and I'm not looking forward to my cards' longevity, driver-wise, now that Pascal is out. The 680 got smashed by the 7970 eventually, and the 290X these days competes in some games with the 980, when originally the 780 Ti beat it. AMD ages well... Nvidia, not so much.

Personally I'll wait for Vega, but this card looks great. Let's hope it overclocks well too.

Even when Nvidia counters it on price, I think AMD will still gain market share, as many people are sick of ngredia and their antics of late. The Founders Edition tax thing is really just the tip of that iceberg. Talk of premium PCBs etc... let's face it, it was a bog-standard PCB with a shoddy cooler and they called it premium components, lol.

June 1, 2016 | 02:25 AM - Posted by Anonymous (not verified)

The higher Founders Edition price seems like Nvidia saying "we don't have very many of these available, so don't try to buy them yet".

June 1, 2016 | 02:33 AM - Posted by quest4glory

It also seems like "we'd better get what we can out of these before people wise up."

June 1, 2016 | 02:20 AM - Posted by Anonymous (not verified)

It looks like the RX 480 probably will not compete directly with more expensive Nvidia parts, which isn't particularly surprising. Given AMD's drive for multi-GPU support, I am wondering if the idea will be to compete with Nvidia via multi-GPU. A pair of Polaris GPUs can probably outperform a single 1080 and also be quite a bit cheaper. It seems like dual GPUs will be a good solution for VR, as long as the software is optimized to take advantage of it.

June 1, 2016 | 02:34 AM - Posted by quest4glory

The idea isn't to compete with Nvidia on the high-end with these Polaris parts. The idea is to offer value to the consumer, provide an entry point into VR that is actually affordable, and yes, take some market share if they can. They are competing on price, not raw performance, with these parts. There is absolutely nothing wrong with that.

June 1, 2016 | 04:00 AM - Posted by renz (not verified)

But for how long? It's not like Nvidia totally ignores the sub-$300 market either.

June 1, 2016 | 02:44 AM - Posted by Anonymous (not verified)

I think the RX 480 will do well vs the Nvidia 1070/1080 the moment it lands in consoles. Hopefully that will not only be PS/Xbox but also a fixed-hardware Steam Machine.

June 1, 2016 | 03:37 AM - Posted by Anonymous (not verified)

http://www.hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_i...

"In the simplest terms AMD has created a product that runs hotter and slower than its competition's new architecture by a potentially significant margin."

Kyle was right all along, GTX 1070 150W, RX 480 150W, much slower performance.

June 1, 2016 | 04:35 AM - Posted by Anonymous (not verified)

AMD even got the name wrong. It should be 'Radeon VR 1080' if they want to gain market share.

June 1, 2016 | 10:01 AM - Posted by Spunjji

There's no bad product, only bad pricing. This is good pricing.

Crucially this should be much cheaper to manufacture than a 1070, so unlike with last gen they can sustain the price advantage even though they're not hitting the power numbers they presumably wanted.

June 1, 2016 | 04:41 AM - Posted by StephanS

So AMD's fastest card for the next 6 months is still the Fury X?
A card that is slower than the GTX 1070?

And AMD's best 14nm card is the Radeon 480, and it's slower than or equal to an old Hawaii 290X?

Did I get all that correct?

So the $200 Radeon 480 renders the R9 390X and ALL $180 to $400 Hawaii cards obsolete.
Unless AMD drops the price of the R9 390X to $180... not likely.

And Nvidia just killed the entire Fiji line. Or will AMD sell the 4GB Fury X for $299 to compete with the much faster 8GB GTX 1070 at $350?

Unless AMD has something hidden, they are about to lose the highly profitable high-end market for good.

And for VR, people don't buy an $800 VR headset + $900 PC to pair them with a $199 GPU. This is GTX 1080 land.

This is great news for the $200 GPU consumer market, but bad news for AMD. Because the GTX 1060 is coming...

June 1, 2016 | 05:51 AM - Posted by Anonymous (not verified)

Wrong on a couple of counts, I think.

Slower than or equal to a Hawaii 290X? I doubt it. Until it's released we won't know for sure, but I think it's on par with the 390X or better. The videocards.com 3DMark leak shows it above the 390X and GTX 980, and it appears to be a legit leak.

AMD did what Nvidia just did: rendered their last-gen high-end cards a poor upgrade option now. I doubt you will see people buying 980 Tis or Titan X cards now that the 1070 is out for less.

Vega is coming, and that is the high-end replacement/upgrade. The RX 480 is a midrange card and was never promised to be anything more. For the price, this will punch well above its weight too.

I agree with you on one thing: the VR focus is misguided. A value-for-money, DX12-ready card for 1080p high / possibly 1440p high settings is what it is.

The GTX 1060 is coming... Nvidia attacking the lower end just as AMD rolls out Vega... watch out, 1080... it's going to be interesting to watch this unfold.

June 1, 2016 | 06:11 AM - Posted by arbiter

The 480 is claimed to be between the 980 and 970, which is the 390/390X performance range.

Vega, well, that is another 4+ months away at least.

The 480's performance is still up for a lot of debate. Yes, everyone is hyped because of the AotS demo they did, but come on. As a bunch of people posted on YouTube, they could see a graphical difference between the two. One was running at higher settings than the other, and just me saying that should make pretty clear which one that would be, and no, it wasn't the AMD side. I would love to see the footnotes AMD would have to provide with that side-by-side comparison to see exactly what they did, like the ones they did with the Fury X vs 980 Ti.

June 2, 2016 | 10:22 AM - Posted by Anonymous (not verified)

Your wish is granted.

https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_q...

June 1, 2016 | 06:19 AM - Posted by maonayze (not verified)

Vega is coming to compete with the 1080. AMD's best 14nm card is not the RX 480 and you know it. They have set out to capture market share in the mid-tier sector. They have put out a $199-$229 card with performance close to a Fury Pro/Nano at that price. This has made quite a few people in that price bracket sit up and take notice, and there are more cards sold in that bracket than anywhere else... go ask 970 owners.

The Fury line was an experiment in HBM technology, and stock has almost dried up in most places, so maybe it will be dropped in price when Vega comes out; so what? At least Fury cards will still get good support in 5 years' time... unlike Maxwell.

I am also betting that the entry-level price for VR headsets will come down to compete with PSVR, though not for the Vive or Oculus. They aren't the only ones due to come out; there are lots more coming soon. So this BS about it being GTX 1080 land is a crock of shit. I know a lot of people who are quite happily VR gaming on a 970/980/Fury, and now they can do it for a few hundred dollars less. If you don't see that as a good thing then you are blind.

Lisa Su also said that Polaris will cover the $100 - $300 market...but the RX 480 is coming in at $229 for the 8GB version, so what does that tell us? It tells us that there could very possibly be another Polaris waiting to take on the 1070 at probably the $279-$299 price point.

We were never told which Polaris chip was in the RX 480....what if it was the higher P11 or a lower end P10?

We will worry about the 1060/50 when it turns up... oh, when I say worry, I mean we won't worry, because it may mean that Nvidia will have to compete on price, and that is good for consumers like us. When will some people learn that if Nvidia gains total control, we will continue to pay extortionate amounts for gfx cards with little advancement... same as Intel CPUs. Good start for AMD :-)

June 1, 2016 | 08:12 AM - Posted by renz (not verified)

What about 970 owners? The 480 is most likely not the replacement they want. Fury will get 5 years of support? If anything, the one that will actually drop driver support first is AMD. We see GCN still getting optimizations because AMD still hasn't moved away from the architecture. Go ask HD 4000-series users how long it took AMD to cut driver support. And NVIDIA already competes on price, ever since the 900 series. It's just that at the same time they're still able to play the premium game.

June 1, 2016 | 10:05 AM - Posted by Spunjji

Given that the Oculus Rift is $600 and you don't need to spend more than $500 on the hardware to build a VR-capable PC around it, your maths is already off by $600.

Perhaps this explains why you don't see the value in a $200 card for a total cost of $1300 instead of your $2400 GTX 1080 proposal.
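
Spelled out with the numbers in this thread (the $2400 GTX 1080 build is the figure being replied to, not an official price), a quick Python sanity check:

# Rough VR build cost comparison, using only the figures cited in this thread
rift = 600        # Oculus Rift headset
base_pc = 500     # rest of a VR-capable build, per the comment above
rx480 = 200       # Radeon RX 480 entry price

rx480_build = rift + base_pc + rx480
print(rx480_build)                   # 1300

gtx1080_build = 2400                 # the "$2400 GTX 1080 proposal" replied to
print(gtx1080_build - rx480_build)   # 1100 less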

June 1, 2016 | 06:49 AM - Posted by Anonymous (not verified)

NVIDIA's answer to this will be the 1060.

June 1, 2016 | 10:06 AM - Posted by Spunjji

It surely will, and now they won't be able to price gouge with it either.

June 1, 2016 | 10:30 AM - Posted by Kareha (not verified)

I feel sorry for the members of PCPer who read some of these comments. It would be nice if some moderation could be done to remove all the fanboyism. Why people are so into cheering for multinational corporations is beyond me.

June 2, 2016 | 10:37 AM - Posted by funandjam

PCPer lets it happen because it drives traffic onto their web pages.

June 1, 2016 | 11:15 AM - Posted by Anonymous (not verified)

Seems like AMD built a GPU for the mid-cycle refreshes of the consoles and is now selling it standalone. Those are the specs of the leaked GPU in the PS4K.

June 1, 2016 | 12:05 PM - Posted by Randal_46

The 480 looks like a great mid-range card with aggressive pricing.

Looking forward to the reviews; hoping it performs just under the 1070 with overclocking headroom.

If so, Raja Koduri will have earned that drink.

June 1, 2016 | 12:13 PM - Posted by mAxius

I for one think AMD is crazy enough, as they want market share! What is the best way to do that? Undercut the competition with a cost-effective smaller chip that performs close to, or better than, their $379 part, which is harvested from a larger die. 14nm wafers can't be cheap.

June 1, 2016 | 01:05 PM - Posted by Alex Nobles (not verified)

AMD hit this one out of the park!

June 1, 2016 | 01:07 PM - Posted by Anonymous (not verified)

The red team has won. That's if they don't screw up, but that's bound to happen.

June 1, 2016 | 01:58 PM - Posted by Anonymous (not verified)

Add another $60 to the cost for active DisplayPort-to-HDMI adapters (and their poor longevity/reliability) if you have a three-monitor setup with only DVI/HDMI/VGA connectors. Why can't they release a card with two HDMI 2.0 ports, one DVI, and as many DisplayPorts as they'd like? Most PC monitors still don't have DP (unless you get a high-end / 10-bit monitor), let alone 4K TVs. AFAIK, every Samsung 4K set from 2015 has HDMI ports, not DP.

This is not to insult AMD, but to insult both AMD and NVIDIA, because they are both doing this. I understand it's cheaper for them since there is an HDMI royalty, but that royalty is nothing compared to the cost of an adapter and the issues that come with it. I'd happily pay the few pennies of royalty for something I can actually use today, not 5 years from now, when the card would be outdated anyway.

On another topic, can the 480 do distortion-free single-card gaming on three monitors like the 1080 can?

June 1, 2016 | 04:27 PM - Posted by Maonayze (not verified)

Why would I buy an active DisplayPort adapter for a card that supports HDMI 2.0 on board?

When I am running from the demons from hell in Doom, the last thing I am worried about is, "Ohh, are the angles of this corridor trigonometrically perfect on my two outer monitors?" FFS.

The 480 probably would be able to do that if it cost another $500.

Like Ansel, it will probably only appeal to a small percentage of users. Don't get me wrong... it's a nice-to-have, but not at that extortionate cost.

If I had three monitors (instead of one nice, big, decent one) then I would angle them slightly better and save the $500.

June 1, 2016 | 09:30 PM - Posted by Anonymous (not verified)

"Why would I buy an active displayport adapter for a card that supports HDMI 2.0 on board?" Because I have a triple display setup of Dell S2740L monitors that only support HDMI/DVI/VGA, so with only one HDMI output, I'd need 2 adapters for the other two monitors. I have bad experiences with them before, with garbled screens, screens changing resolutions because the adapter is going bad, to outright adapter failure, from cable matters brand to star tech, etc... I can see 3 DP's in a few years from now, but not today, almost every monitor and 4K TV support HDMI. Shame on both Nvidia and AMD penny pinching on the HDMI royalty which costs people more money and headaches for non DP multi-monitor setups.

June 1, 2016 | 09:39 PM - Posted by Anonymous (not verified)

One more quick note: the CF demo of Ashes is misleading; the detail level was lower on the CF setup. And that's a CF setup. I'm not anti-AMD, I loved their Athlons and Opterons, I'm just sick of this good-enough stuff from them. Are they trying to go the way of Cyrix? This is yet another AMD mainstream product that no one wants, like waiting on Nissan for the newest turbocharged Leaf at an auto show. Yes, some want it because it's cheap, but it's not. Save the money for the 1080 and amortize it over 3 generations and you'll have better value than this $200 card. My GTX 680 still works very well, but my issue is the 2GB limitation. If AMD released a card similar to the 1080 or better, then everything changes; this 480 changes nothing other than rendering AMD "All Mediocre Devices," as it has been ever since Hector Ruiz left (ugh, if he had only bought ATI at the proper price and kept Keller on for the next chip design).
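
To put rough numbers on that amortization claim (a sketch only; the $699 Founders Edition price and one purchase per generation are assumptions, nothing official):

# Cost per GPU generation: one GTX 1080 kept for three generations
# vs. a ~$200 card bought fresh each generation (assumed prices)
gtx1080 = 699                  # GTX 1080 Founders Edition (assumption)
rx480 = 199                    # RX 480 launch price

per_gen_1080 = gtx1080 / 3     # one purchase amortized over 3 generations
per_gen_480 = rx480            # one purchase per generation

print(round(per_gen_1080))     # 233
print(per_gen_480)             # 199

The per-generation dollar cost comes out close either way; the value argument really rests on holding 1080-class performance for the whole stretch.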

June 1, 2016 | 11:47 PM - Posted by Bianchi4Me (not verified)

The market for flagship video cards is a tiny subset of a shrinking PC market. Saying that AMD has to beat or match the 1080 to be relevant is like saying the new Mustang has to outperform a Ferrari in order for Ford to have a successful product.

This product is specifically intended to be an affordable entry into the VR market, which AMD sees as an under-served area with growth potential. We are talking about people whose entire system costs under $1000. Hard to do that with a $600-level card. Making an AMD equivalent to the 1080, even if they could do it, would not fulfill the needs of the users they are targeting with this product.

June 3, 2016 | 12:37 AM - Posted by Anonymous (not verified)

But the point is that they are more technologically advanced and produce a better product; from the top halo card, they create more affordable derivatives. How long did it take for AMD to barely catch up to 970 performance? How much power has been wasted over the past few years thanks to the power consumption AMD's cards draw? Also, the 480 is no Mustang; it's more like a turbocharged Leaf that no one wanted. If Ferrari produced more affordable derivatives of their car line (minus the piss-poor reliability), I'm sure they would outsell Ford. However, part of Ferrari's sales appeal is that most people can't afford them, so it's not apples-to-apples; more like apples-to-caviar.

June 3, 2016 | 10:34 PM - Posted by Bianchi4me (not verified)

So... that means the GTX 970, or anything slower than the 480, is garbage too, right? Clearly nobody wants any of those cards either. Complete garbage that consumers refuse to touch. Except for about 98% of the GPU market, of course.

Your argument that nobody at all will be remotely interested in a card faster than a GTX 970 at $200 seems odd to me when 970s are still commanding ~$300, even on sale, despite now being officially "obsolete".

June 2, 2016 | 10:23 AM - Posted by Anonymous (not verified)

"the detail level was lower on the CF setup"

No it wasn't.

https://www.reddit.com/r/Amd/comments/4m692q/concerning_the_aots_image_q...

June 1, 2016 | 02:09 PM - Posted by Anonymous (not verified)

Yep, AMD just killed the 950/960/970 and maybe even the regular 980. I was expecting maybe $299, but $199?!?

That is just straight up batsh1t crazy.

June 1, 2016 | 05:46 PM - Posted by Anonymous (not verified)

I think it is possible for them to offer GTX 1070-class performance at 200 dollars, since they're using a smaller process node. The dual RX 480 demo showed 51 percent scaling versus the 1080, which is barely better than one GPU doing the work, so if a single card runs that well, it might be that they achieved close to 1070 performance.
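
Taking the 51 percent scaling figure at face value, the implied single-card arithmetic looks like this (a rough sketch; how the demo actually measured utilization is not clear):

# If two RX 480s roughly match a GTX 1080, and the second card adds
# only 51% on top of the first, back out the single-card share
gtx1080_perf = 1.00            # normalize the 1080's demo result
scaling = 0.51                 # claimed benefit of the second GPU

single_rx480 = gtx1080_perf / (1 + scaling)
print(round(single_rx480, 2))  # ~0.66, roughly two-thirds of a 1080

Two-thirds of a 1080 is not far below a 1070, which is presumably the comparison being made here, though it leans entirely on a single vendor demo.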

June 1, 2016 | 07:48 PM - Posted by Butthurt Beluga

I already know a handful of people who have picked the RX 480 as their next GPU because the price point, performance, and power usage are just perfect for them.

Personally I'm not in the same boat, as I have a 1000W PSU and a Fury X, but I'm really happy to see a sub-$200 card that will truly push all games at 1080p/60 FPS max settings.
And if it's better than a 390X or in that range, then QHD is definitely an option as well.

Man, mid-range GPUs this year are looking super good. I can't wait 'til real high-end GPUs come out, like Vega/1080 Ti.
No real high-end GPUs yet... well, other than the high-end-priced-but-mid-range GTX 1080.

Good stuff; lots to look forward to this year.

June 2, 2016 | 01:53 AM - Posted by Simon (not verified)

To those who toy with the idea of a 1070 price cut, or the 1060 beating it overall: it's irrelevant. Here's why:

I can't see myself investing in a part that isn't fully DX12 compliant. Considering that the next time a fourfold increase comes around (I've never upgraded for anything less in 20 years), it'll be in what, 8, 10, perhaps 12 years, barring a groundbreaking, revolutionary discovery? It has been 8 years since the last one at the $200-300 price point.
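
To put numbers on that: the last card to reset the bar in this bracket was, for instance, the Radeon HD 4870 (mid-2008, $299, about 1.2 TFLOPS of peak compute), and against that reference the RX 480's stated floor works out to roughly the fourfold paper increase:

# Paper-spec jump across an ~8-year upgrade cycle in the $200-300 bracket
# (the HD 4870 as the starting point is an assumption for illustration)
hd4870_tflops = 1.2    # Radeon HD 4870, June 2008, $299
rx480_tflops = 5.0     # AMD's ">5 TFLOPS" floor for the RX 480

print(round(rx480_tflops / hd4870_tflops, 1))   # ~4.2x on paper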

DX11 will be irrelevant in a few years; all consoles are already running DX12-like code, and async is getting used more and more, according to insider sources. I have to plan almost a decade in advance, and in this regard, buying a 1060, or even a 1070 or 1080, would be pointless. And that's not even taking into account that devs will be used to coding for proper DX12 async, so even when they do PC exclusives, it'll be a habit.

In other words, DX11 is already irrelevant; the upgrade cycles have become way too long for this not to be painfully obvious. NVIDIA shot itself in the foot, big time, and the savvy users are seeing it. AnandTech was the first to point it out, then others tested the more-than-4-cores theory and it turned out correct. And it's not a driver overhead issue, that has been tested for; NVIDIA's driver overhead is actually better. But software preemption, async "emulation", is what kills it in DX12 for future-proofing.

It probably comes down to not wanting to change their architecture too much to stay on schedule, or perhaps making it fully DX12 compliant would've hurt the clock rate scaling and the performance they could squeeze out of the die shrink this generation in DX11, aka most actual games. No matter; for my needs, it was the worst thing they could do.

People act like it's not a big deal. It IS a big deal for people who don't upgrade often and want future-proofing plus a good perf/price ratio.

I wish I could go for an NVIDIA part this round, but it seems I'll stick with team red despite wanting to get back to up-to-date third-party programs like Inspector. The RadeonPro software is kind of a joke; forcing AA or AO or whatever on AMD hardware right now is a PITA. But sadly, NVIDIA fucked up and is really giving me no choice.

The only hope for NVIDIA right now, if they want the money of savvy PC users who look to the long term and are set to upgrade this year, is to actually release a fully DX12 compliant part, at least in the midrange, that can match Polaris 10 in performance at the same price, and perhaps with less power usage.

As for me, my wait is over. P10 provides the fourfold increase, the power usage is lower than my current GPU's, it's fully DX12 compliant, and the price is right; heck, even lower than my last buy, which was more around $275-300.

All in all, it was a very smart move on their part to put the focus on DX12 and async.

June 2, 2016 | 03:03 AM - Posted by renz (not verified)

DX11 is irrelevant? You'd be surprised; there are new games still being developed on DX9. Don't look at the triple-A stuff only; look at the whole scene, including the indie titles. The thing most people don't understand is that DX12 is not meant to replace DX11. It is for those who want a low-level API. For those not willing to deal with low-level complexity (developers with tight budgets), DX11 is there to give them a high-level API. That's why MS still updates DX11, like feature level 11_3, which brings features similar to FL12_1.

As for async compute, the feature might not end up being used on PC unless developers are sponsored to use it. Some people are probably thinking the async optimization on consoles will carry over to PC. Well, it won't. A Hitman dev mentioned that async needs to be tweaked on a per-card basis. Go take a look at Guru3D's recent test on Hitman: even Radeons take a performance hit with DX12. And then look at the various DX12 titles out there. More often than not, the DX12 version of a game has far more stability issues than its DX11 counterpart.

June 2, 2016 | 12:09 PM - Posted by Simon (not verified)

You're really telling me with a straight face that in my current upgrade cycle (the next 8 to 12 years), DX11 will still be dominant? Yeah... sure...

June 3, 2016 | 12:42 PM - Posted by renz (not verified)

They will not, but most likely there will be another high-level API that replaces DX11. DX12 will not be the one replacing it, so don't expect most games to be running DX12 two or three years from now.

June 3, 2016 | 04:45 PM - Posted by Simon (not verified)

Why the heck would it not replace DX11? Every DX version before this one replaced the one before it, and, surprise, Vulkan, the only DX competitor, also uses proper async and not some software/driver-level preemption hack. If you're referring to resistance to W10 adoption, it's a non-issue, as it was for the DX predecessor that came with Vista, if you remember; it gained widespread adoption despite that, and we're all running it now.

Not a single thing you said is relevant. Going for a card that properly supports DX12 and Vulkan and/or the hypothetical DX11.3 async at this moment is the best option for a long-term upgrade, bar none. And in this regard P10 is the best contender so far in the midrange.

The 1060 is a short-term option at best if it doesn't support proper async. If the 1160 supports proper async, then it'll be a good long-term option for those set to upgrade in the midrange during 2017, but for now, P10 all the way.

June 2, 2016 | 12:18 PM - Posted by Simon (not verified)

And indie games... be serious, I haven't seen a title yet that pushes graphics even just a wee bit (perhaps there are some; I don't play them). You can run most, if not all, indie games maxed at 1080p, sometimes 1440p, or even 4K (or the equivalent with dynamic res/downsampling) in some extreme cases, on 8-year-old GPUs. It doesn't matter that they're DX9/10/11 at this point. It has no bearing whatsoever on future-proofing and what will happen during the next 8-12 years.

As for MS hypothetically updating DX11, it doesn't matter if NVIDIA's hardware doesn't support it other than in software, with driver emulation like what's done in DX12 right now; they'll still take the same performance hit. And this won't change with the same hardware 8-12 years down the line.

June 2, 2016 | 06:57 AM - Posted by lucas.B (not verified)

A lot of people are getting sensitive.
Let's face it, AMD or NVIDIA, what matters is performance and price.
We'll see then how good this 480 is, and if there is a 480X to match the 1070.
Just take what suits your wallet: $200, the 480; $300 and above, the 480X or 1070; above that, the 1080.
But for most people with a 970, 980, 980 Ti/Titan, or 390X and Fury (X), our rigs are not obsolete! And yes, if we want something better, the 1070 or 1080 is the solution right now.
I'm still on my 4-year-old HD 7970!!
But I think it would be better to wait on Vega and the 1080 Ti, maybe a Titan (Y, Z). We'll get good perf for the money! And something that lasts!

June 2, 2016 | 07:05 AM - Posted by Chad (not verified)

I think $199 or $230 is a pretty good deal, but I wish AMD would've brought out a 490 instead of the 480. Still, for a little cheaper than a 1070 you can buy two 480 8GBs, which is appealing if they have good drivers and no micro-stutter. But I'm getting an EVGA 1070 because I've always bought EVGA on the NVIDIA side and never had any problems. And the new cards match my rig well, being black and white.

June 2, 2016 | 04:36 PM - Posted by Anonymous (not verified)

NVIDIA will drop their prices, and I bet these won't make it to market at that price.

Anyone know if these cards will work well with Revit or 3ds Max to speed up rendering?

June 5, 2016 | 10:20 PM - Posted by mathirian (not verified)

If this isn't UNAMBIGUOUSLY faster than the 390 (which is only ambiguously faster and often slower than the 970), and there is no 490 out within a few weeks to compensate, I'm afraid this new scheme of AMD's will be a major letdown, at least for anyone looking to buy NOW.

June 6, 2016 | 07:41 PM - Posted by Anonymous (not verified)

Really? So what would you suggest I buy 'now' for $199?

I'm open to suggestions. (As long as it's not second-hand.)

June 7, 2016 | 07:03 AM - Posted by Anonymous (not verified)

Let's hope those new Linux drivers that AMD has in beta are worth the effort. Otherwise this card is worthless to those using Linux. NVIDIA is bad enough, but AMD's drivers are pathetically bad.

Hoping to go Zen with a nice 480 to top it off... if the drivers hold up.

June 9, 2016 | 09:48 PM - Posted by darknas-36

If NVIDIA wants a piece of AMD's pie on a budget, they need to make a GTX 1070 4GB variant for $199, because right now they are offering the 1070 and 1080 in 8GB only. So if NVIDIA gets their marbles together, they need to release a GTX 1070 4GB edition or make the GTX 1060 $199 with the same performance as the Titan X.

June 10, 2016 | 09:15 PM - Posted by Anonymous (not verified)

wow! a little bit of wee came out when I saw this article!
