AMD RX 480 (and NVIDIA GTX 1080) Launch Demand

Subject: Graphics Cards | June 30, 2016 - 07:54 PM |
Tagged: amd, nvidia, FinFET, Polaris, polaris 10, pascal

If you're trying to purchase a Pascal or Polaris-based GPU, then you are probably well aware that patience is a required virtue. The problem is that, as a hardware website, we don't really know whether the issue is high demand or low supply. Both are manufactured on a new process node, which could mean that yield is a problem. On the other hand, it's been about four years since the last fabrication node, which means that chips got much smaller for the same performance.


Over time, manufacturing processes will mature, and yield will increase. But what about right now? AMD made a very small chip that produces roughly GTX 970-level performance. NVIDIA is sticking with their typical 3xx mm² chip, which ended up producing higher-than-Titan X levels of performance.

It turns out that, according to online retailer Overclockers UK (via Fudzilla), both the RX 480 and GTX 1080 have sold over a thousand units at that location alone. That's quite a bit, especially when you consider that it covers only one (large) online retailer in Europe. It's difficult to say how much stock other stores (and regions) received compared to them, but it's still a thousand units in a day.

It's sounding like, for both vendors, pent-up demand might be the dominant factor.

Source: Fudzilla



June 30, 2016 | 08:34 PM - Posted by StephanS

AMD makes around $40 profit per RX 480; Nvidia around $450.

It's too bad AMD was stuck on a bad 14nm process... because if Polaris clocked at 2 GHz like Pascal, the RX 480 would have been a KILLER card: 70% faster, same die size and power usage. AMD could have asked $600 for the exact same RX 480 at that clock speed.

June 30, 2016 | 09:17 PM - Posted by Butthurt Beluga

From what I understand there's nothing so far that would indicate a poor process, and the reason AMD cards don't clock like Nvidia cards do is that their chips are much more dense. AMD cards also scale MUCH better with overclocks than Nvidia cards do.

MOAR JIGGIEHERTZ does not necessarily equal actual performance because of how the chips are designed.

June 30, 2016 | 09:29 PM - Posted by remc86007

I agree; look at AMD vs Nvidia chips in compute tasks. Nvidia clearly strips their architectures down to the bare minimum for gaming. Raja said during the interview that Polaris was designed with a focus on getting Polaris 11 ready to ship in notebooks by "back to school" time 2016. I don't know the complexities of how this stuff works, but I imagine the mobile first design direction required a lot of choices that necessitated a slower clock speed.

I'm buying an RX480 and giving my wife my 970, but I'll likely upgrade to Vega when it comes out. I've been really impressed by AMD over the past year and I hope they can continue to become more competitive.

June 30, 2016 | 10:43 PM - Posted by Anonymous (not verified)

AMD's consumer GPUs have always had more compute resources, so they run hotter and consume more power! But some people use GPUs for more than gaming, and having consumer SKUs with a little more memory for higher-resolution textures and higher-polygon-count models and scenes, without having to go to the costly professional-level graphics SKUs, means there is a more affordable choice for people who use Blender 3D and other open-source graphics software.

I will be looking at the Bristol Ridge/AM4 systems and using dual RX 480s for Blender Cycles rendering on the GPU, so the CPU will not be that much of a factor for my rendering workloads. There is already better Blender Cycles rendering support for AMD's GCN GPUs going forward, and AMD has open-sourced a lot of the FireRender code for ray-tracing acceleration on its GPUs. Also, gaming-oriented graphics rendering (that high-frame-rate, low-latency kind) is not a concern for people doing non-gaming graphics workloads, where the workloads are often many minutes per frame and the scenes have many high-polygon-count mesh models. So for people doing graphics workloads, those extra GCN ACE async-compute resources, used for high-quality ray-tracing calculations accelerated on the GPU, will be heavily utilized.

It's not all about having only the limited resources necessary to run games and nothing more, as on those Nvidia SKUs. I do more than game, so I want the extra compute resources that AMD offers, because even the most expensive Intel Xeon processor is a very poor performer for graphics/rendering workloads without the help of a GPU. Ray-tracing acceleration on the GPU is going to remove the need for expensive CPU cores for ray-tracing rendering workloads and save people a lot of money.

Nvidia has never been interested in providing, in its consumer SKUs, any compute ability beyond what is needed for gaming. But even gaming, VR gaming especially, is starting to make use of the extra async compute, and it shows in AMD's GCN GPUs having better DX12/Vulkan performance.

July 1, 2016 | 05:22 PM - Posted by Anonymous (not verified)

GTFO... if you render 3D you probably have an excuse to write off your video card as part of your business.

You don't buy crap for this.

Why in the world would you spend $200 on a video card now? VR isn't set in stone. We know this card can't do 4K. PlayStation Neo and Xbox Scorpio will probably run games better than these cards, because all that stuff you talked about, how AMD is so built to do so, so much, still runs on top of a shitty platform... well, guess what? The consoles are built to run games as they were intended.

You absolutely need a card, this is a decent short term solution. Nothing else.

These assholes... Amd & Nvidia are just milking you suckas

July 1, 2016 | 06:58 PM - Posted by Mark_GB

So please tell us what your better solution is. We are all ears here!!

July 1, 2016 | 10:14 PM - Posted by Anonymous (not verified)

The post is not talking about professional rendering, and Blender 3D is free, so it's great for students learning on a budget. It's also great for independent game creators to use the RX 480 for texture creation. A lot of textures are created on high-resolution/high-detail models and rendered using ray tracing, AO, and sub-surface scattering, with lighting set up at the highest sample rates to create very detailed textures that can then be applied to low-detail gaming mesh models, to give some games a more realistic look. So having 2 or 3 RX 480s doing rendering/ray tracing at low cost may be the only way for some who cannot afford the pricier Nvidia hardware, the really, really pricey Nvidia pro hardware, or the AMD pro hardware.

That FireRays/FireRender software/middleware open-sourcing by AMD is going to be good for those types of workloads for people who do not have the funds for the pricey pro hardware.

And VR has plenty of funding. Also, there will be more development going towards Vulkan than DX12, because Vulkan is what mobile gaming will be using, not DX12. Those gaming models and gaming textures do not grow on trees, and for many independent games there simply is not the budget for all those expensive workstations. The way to go for affordable ray-tracing rendering is accelerating the ray-tracing calculations on AMD's GCN ACE units instead of using CPUs that cost too much for most users. So with Blender's Cycles rendering being done on the GPU, and not so much on the CPU, much higher ray-tracing settings can be run to create a detailed image/texture in minutes instead of the hours it takes a CPU to do the same work.

All of the open-source 2D graphics software can now use the GPU to accelerate filters, and even spreadsheet and office software makes use of GPU compute acceleration. With the Vulkan API using SPIR-V, there is already support for high-level programming languages generating SPIR-V based IL instructions and running on the GPU; both Vulkan and OpenCL use SPIR-V to accelerate graphics and compute on the GPU. So the ACE units on the more affordable AMD GCN based GPUs can be used by all types of software and programming languages for more than just graphics workloads.

July 1, 2016 | 04:29 AM - Posted by Anonymous (not verified)

" and the reason why AMD cards don't clock like Nvidia cards do is because their chips are much more dense"

RX 480 die size: 232mm^2
GTX 1080 die size: 314mm^2

Rx 480 transistor count: 5700 Million
GTX 1080 transistor count: 7200 Million

RX 480 transistor density: 24.6 million/mm^2
GTX 1080 transistor density: 22.9 million/mm^2

It's a slight areal density increase, but nothing dramatic, and not sufficient to explain the perf/watt disparity.
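The arithmetic behind those density figures is easy to check; a minimal sketch in Python, using only the die sizes and transistor counts quoted above:

```python
# Sanity check of the transistor-density figures quoted above.
# Die area (mm^2) and transistor count (millions) are the published specs.
chips = {
    "RX 480":   (232, 5700),
    "GTX 1080": (314, 7200),
}

densities = {}
for name, (area_mm2, transistors_m) in chips.items():
    densities[name] = transistors_m / area_mm2  # million transistors per mm^2
    print(f"{name}: {densities[name]:.1f} million/mm^2")

# Relative density advantage of the RX 480
print(f"ratio: {densities['RX 480'] / densities['GTX 1080']:.2f}x")
```

The ratio works out to about 1.07x, i.e. the RX 480 is only around 7% denser, which matches the "slight areal density increase" point.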

The difference in effective perf/watt is more down to architecture implementation improvements: Nvidia doubled down on optimising Maxwell for 28nm, and this experience with squeezing extra efficiency out of an existing process has paid dividends in making Pascal more efficient than if 16nmFF+ had been available earlier. AMD appear to have gambled on getting 14nmFF earlier to deal with scaling issues, and the continued process delays have not been kind to them in this regard, with the RX 480 having an extremely similar performance and perf/watt to the Maxwell era GM204.

July 1, 2016 | 09:56 AM - Posted by tatakai

yeah, just a couple million more transistors per mm^2 ....

Nvidia cut down their chips, and having a larger, less dense chip is bound to help with power consumption.

July 1, 2016 | 11:07 AM - Posted by Anonymous (not verified)

It may also be that AMD never had what Nvidia did: a Maxwell-style redesign.

GCN4 is an evolution of GCN 2, but Pascal is an evolution of Maxwell, which was their OG sub-20nm design. AMD hasn't had that redesign, so their chips are at a disadvantage when it comes to complexity and power draw. Nvidia already optimized that stuff on 28nm, so they could spend the entirety of the 16nm improvements on higher clock speeds and lower power draw, while AMD had to do everything with a smaller budget and a more complicated arch.

July 1, 2016 | 12:30 PM - Posted by Anonymous (not verified)

Really, Nvidia comes up with a totally different marketing name each generation for its GPU computing resources, papering over incremental changes/regressions across Nvidia's "generations"! The press then takes to its own press-invented 1.1-1.3 naming convention for AMD's GCN generations, and you infer from those press-derived names that AMD's GCN SKUs over the years are only incremental improvements. AMD has never had an official naming scheme for its GCN graphics generations, and just because Nvidia uses a new marketing name for each of its "new" generations, you think Nvidia has more generational improvements than AMD does.

All of this DX11 benchmarking will have to be redone. Nvidia did spend more on its driver tweaking for DX11, but the DX12/Vulkan APIs reflect AMD's investment in more than just drivers. AMD's Mantle project is directly responsible for both DX12's and Vulkan's inner workings and their closer-to-the-metal design, and AMD's current investments in driver R&D are paying off. The gaming/graphics APIs and gaming software ecosystems are now starting to make use of AMD's GCN async-compute features, so today's DX11 benchmarks are already out of date for the new titles coming over the next year that are optimized for DX12/Vulkan. Your conclusions will have to wait for more data to be collected over the next year.

Really, AMD's move from the TeraScale VLIW micro-architecture and its instruction-level parallelism to the GCN non-VLIW/RISC micro-architecture and its thread-level parallelism was a very drastic change, so that is where most of the change happened, and AMD has made plenty of GCN improvements over the last 4 generations of GCN. That includes not stripping out most of the compute on its consumer SKUs the way Nvidia did on its consumer SKUs. Nvidia has used this compute stripping to push its low-power marketing mantra and segment its consumer SKUs towards gaming-only usage, while AMD has kept in its consumer SKUs more of the compute features that are useful for GPU acceleration uses other than gaming.

So consumer users of AMD GCN based GPUs can do other types of graphics workloads, and other GPU acceleration workloads, without having to purchase the more costly pro GPUs. Now AMD has even improved its GPUs' power usage metrics into a more competitive range with Nvidia's products, all while not stripping out that extra compute. With VR games beginning to use that extra compute, and even non-VR games using it, Nvidia finds itself behind again, because DX12/Vulkan will allow for more gaming compute to be done on the GPU along with the graphics compute. Nvidia's deficiency of not fully implementing async compute in its GPU hardware is real, and it is already showing up in some DX12/Vulkan optimized games.

July 1, 2016 | 05:33 PM - Posted by Anonymous (not verified)

GTFO.... We've all seen this before. You are talking about DirectX 12 as if DirectX 8, 9, 10, and 11 never happened.

BUY FOR THE NOW KIDS

Your shit is going to be considered old in 2 years. You really think GTX 970 like performance is going to be Recommended Hardware Specs after it's been on the market for 4 years?

July 1, 2016 | 10:53 PM - Posted by Anonymous (not verified)

Who cares about DX12? It's Vulkan that will be on more devices/OSs/systems. Who gives 1/10th of a rat's shiny red a$$ about any DX version, now that Vulkan is online with the very same or better feature set than any DX version? For M$ and Windows 10, and that DX road to serfdom: GTFO, your days are numbered! Vulkan will be on all the tablet/phone SKUs in the same version as the Vulkan on the PC/laptop SKUs. Screw M$ and Nvidia's closed gaming-ecosystem graphics APIs/middleware! It's Vulkan for cross-OS/platform graphics API support on the many more billions of devices not tied to M$'s or Nvidia's fortunes!

Those ARM Mali/Bifrost based GPUs, and IT PowerVR GPUs, will mostly be running the Vulkan graphics API, as will PCs/laptops not tied to M$'s closed UWP ecosystem.

July 2, 2016 | 10:23 PM - Posted by Anonymous (not verified)

Still waiting for Mantle/Vulkan to really give us the improvements promised so long ago. It should make sense: all consoles use AMD, so why haven't ports been able to use this advantage?

July 1, 2016 | 11:35 AM - Posted by Anonymous (not verified)

Yes, Nvidia's consumer GPUs are for gaming graphics only, while AMD's consumer GPUs have a bit more compute for other uses. But watch out, Nvidia: even games are beginning to make use of that extra compute on AMD's GPUs, so let's see which GPUs make better use of DX12/Vulkan as time passes. There should be more interesting benchmarking ahead as more games are optimized for the new graphics APIs.

AMD should be helping game makers use their GCN 1.3/4.0 ACE units to take some of the gaming compute workload off of weaker CPUs, so users can build even more affordable VR-ready gaming systems. Eventually, explicit multi-adapter will allow for even better multi-GPU load balancing, with AMD CUs able to be purposed by the hardware schedulers/games for compute workloads. So in future DX12/Vulkan optimized game titles, maybe 2 GPUs can be tasked with mostly graphics workloads and a third GPU tasked with mostly compute workloads by the game, with less dependency on the CPU for many gaming workloads. Once GPU load balancing can move out of the drivers and into the graphics APIs/OSs, and be more under the control of the game/gaming engine, there can be industry-wide development and tweaking of multi-GPU usage on gaming systems, so I expect much improvement in multi-GPU/graphics-adapter scaling. The VR gaming market will drive a lot of the R&D around better multi-GPU load-balancing/scaling efficiency across many GPUs, as well as moving more of the gaming compute onto the GPU(s).

July 1, 2016 | 05:38 PM - Posted by Anonymous (not verified)

GTFO

that's a lot of writing, AMD. You should spend more time optimizing your cards so they just beat Nvidia, and you don't have to make up features that no one cares about.

The RPi can do everything you are talking about while you are asleep at night.

Don't talk VR and then start talking about compute.

July 1, 2016 | 06:39 PM - Posted by Luthair

I'm starting to think Lisa Su is pegging someone's dad.

July 1, 2016 | 10:40 PM - Posted by Anonymous (not verified)

Here we go with the gaming gits; they think GPUs are only for gaming, and they will pay with their first-born (web-footed child) to get Nvidia's gimped/overpriced hardware.

Hey, gaming gits: those gaming models and high-resolution textures do not grow on trees. VR uses gaming compute accelerated on the GPU, and a lot more gaming compute traditionally done on the CPU will instead be done on the GPU to cut down on latency and make VR gaming more responsive. And Nvidia's async-compute kludge in software will not be responsive enough for VR gaming workloads. Nvidia is even trying to restrict its Pascal micro-architecture's more fine-grained GPU thread scheduling/dispatch to CUDA-only code, so it will not be available to OpenCL based software. That's just more vendor lock-in on Nvidia's part, as usual.

The bitcoin miners sure liked AMD GCN GPUs before the bitcoin ASICs were brought to market, and AMD always shows more concern for other GPU uses in its consumer SKUs. Nvidia continuously tries to further segment its consumer GPU SKUs to gaming-only workloads, while forcing its users to purchase its pricier SKUs if they want to do non-gaming workloads on their GPUs.

June 30, 2016 | 09:56 PM - Posted by tatakai

Even if it's a rough start for the 14nm process, it will get better. This actually gives AMD a lot of flexibility on future product releases, e.g. a 485 or some other 480 card with much higher stock clocks and similar power consumption.

Nvidia's profits might not be that high on the card, and AMD's not that low. Considering the low stock, it shouldn't be that high for Nvidia.

June 30, 2016 | 10:03 PM - Posted by Anonymous (not verified)

The RX 480 is a mainstream SKU, and AMD's Vega will be the competition for the GTX 1080! So that comparison will have to wait for Vega! And some retailers selling out of their first stocks of any new card is not a good indicator of how many total units will be sold over the longer run! Time and price will be more on the RX 480's side in the longer run for total cards sold compared to the GTX 1080, as some users will get dual RX 480 aftermarket cards with an 8-pin plug, better cooling, and higher overclocking ability.

The RX 480's smaller die size nets AMD more dies per wafer, so that economy of scale will be on AMD's side relative to Nvidia's larger dies. The smaller, more densely packed RX 480 dies may run hotter, but the aftermarket SKUs will bring that heat down, and less heat means less leakage, provided there is enough cooling on the aftermarket cards.

I will be looking at the aftermarket RX 480s and their ability to be overclocked; in dual configurations they may get damn close to what a GTX 1080 can offer, and dual aftermarket RX 480s will still cost less than a GTX 1080. The real sleeper may be the RX 470 in a dual configuration on an aftermarket card under $200. The RX 470 will be a binned RX 480 part with fewer graphics/compute resources that generates less heat, but it may be able to be clocked a little higher than an RX 480, and the RX 470 will probably have the best price/performance metric, even compared to the RX 480.

I think a lot of the issues with the 480 will be worked out by the aftermarket GPUs offering an 8-pin power solution and much better cooling, and the SKUs actually tuned for overclocking will get into the GTX 1080 range, making for some interesting choices for gamers once all the aftermarket solutions arrive.

AMD probably went with GF's 14nm process (licensed from Samsung) to get more dies per wafer and aggressively go after the mainstream market, which provides the most revenue at a lower profit margin. AMD will make up for the lower margin by targeting the mainstream segment that sells the most units. A larger sales volume is a much faster way of amortizing R&D costs, since those costs can be spread across more units in smaller amounts. So total unit sales volume is what allows the R&D investment to be quickly recouped, letting AMD earn a higher profit margin over time without raising prices on the RX 480 and other RX 400 series SKUs. AMD will also have some latitude to lower its RX 400 series pricing once those R&D expenses are fully amortized. This was AMD's explicit strategy all along for its RX 480/RX 400 series mainstream offerings.
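The dies-per-wafer point can be illustrated with the standard first-order approximation (gross dies only; it ignores defect yield, scribe lines, and edge exclusion). The die areas are the ones quoted earlier in the thread; the 300 mm wafer diameter is an assumption:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Common first-order estimate of gross dies per wafer:
    wafer area / die area, minus a correction for partial dies at the edge."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

print(dies_per_wafer(232))  # RX 480-sized die
print(dies_per_wafer(314))  # GTX 1080-sized die
```

Under these assumptions the 232 mm² die yields roughly 40% more gross dies per wafer than the 314 mm² die, which is the economy-of-scale argument the comment is making.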

June 30, 2016 | 10:44 PM - Posted by pdjblum

Nice to hear from a voice of reason and intelligence without an agenda. I can only say I am disappointed with Raja for not addressing the power issue, which he had to be aware of, before the release. I have tremendous faith in his ability to turn this company around in a big way, along with Lisa Su, but part of that faith is based on their being honest, and under-promising and over-delivering. My faith is backed by my hard earned cash in a very big way. I just can't help feeling that this misstep was done thinking no one would notice, which is disingenuous at best.

July 1, 2016 | 02:24 AM - Posted by Anonymous (not verified)

These are teething issues; they need to do some BIOS tweaking on the reference design to put more of any overdraw onto the 6-pin connector and less on the motherboard side, since the 6-pin's rating has a little more latitude for overdraw. That said, the cooling solution is not as robust on the reference designs, and there is probably some variance in the GF/Samsung 14nm process that makes some dies perform worse than others and draw more power when they are hot. The software/firmware is supposed to make adjustments for each die to keep it operating inside the specifications.

There is a vicious cycle in a processor die's power delivery circuitry where heat is involved: if the cooler is not good at quickly removing heat, that excess heat cannot be removed properly and promptly, and the circuits will leak more as the temperature increases, generating still more heat that feeds back into the thermal dissipation process.
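That feedback loop can be sketched with a toy fixed-point model; all the numbers here (dynamic power, leakage coefficient, thermal resistances) are illustrative assumptions, not measured RX 480 data:

```python
# Toy model of the heat/leakage feedback loop described above.
# All coefficients are illustrative assumptions, not measured GPU data.
P_DYN = 110.0        # dynamic power draw, watts (assumed)
T_AMB = 30.0         # ambient temperature, deg C (assumed)
LEAK_PER_DEG = 0.25  # extra leakage watts per degree above ambient (assumed)

def steady_state_temp(r_th: float, iters: int = 200) -> float:
    """Iterate temperature -> leakage -> temperature until it settles.
    r_th is the cooler's thermal resistance in deg C per watt."""
    t = T_AMB
    for _ in range(iters):
        power = P_DYN + LEAK_PER_DEG * (t - T_AMB)  # leakage rises with temp
        t = T_AMB + r_th * power                    # hotter chip at higher power
    return t

print(f"good cooler: {steady_state_temp(0.30):.1f} C")
print(f"weak cooler: {steady_state_temp(0.55):.1f} C")
```

With these made-up coefficients, the weaker cooler settles well above the better one, not just because it removes heat more slowly, but because the extra leakage at the higher temperature compounds the difference, which is the vicious cycle the comment describes.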

Raja is not running AMD, and he can only do so much regarding a new product's release. AMD has a board of directors and investors that Lisa Su has to answer to, and Raja has to answer to Su. Raja may head RTG, but RTG is still part of AMD, and most of Raja's duty is GPU product development and managing that division. There have been comments on the issue from AMD representatives who probably report directly to Raja, but he will probably not comment, or will comment very carefully if he does, until the full solution to any problem is found. If you are worried about the reference card issues, then wait for the aftermarket cards that will provide 8-pin or better power options, taking more of the stress off the PCI/motherboard power delivery side.

I personally would never overclock any card with a 6-pin power connector and a wattage/power profile so close to taxing the motherboard's PCI power delivery maximum along with the 6-pin's maximum, especially on a GPU fabricated on a brand new, immature 14nm process node. When I see the words reference card alongside the word new, I always stop and think. Add to that a brand new fabrication process node for the processor/GPU, and, well, I'll wait for the aftermarket solution.

July 4, 2016 | 03:50 AM - Posted by iru (not verified)

Exactly my thoughts. However, for the last half a year I have been playing around with AMD stocks, and I understand why they cannot undersell their products in their official broadcasts. The problem is that stock people (to whom they have an obligation to deliver good margins and higher stock prices) often only read the official stories and sales numbers. Saying they will release a mediocre product but then actually delivering a good product would plummet their stock, and currently they need a good stock price to pay off their large debts. I have faith in Lisa and Raja, however; it will be interesting to see where the company will be in 5 years.

July 1, 2016 | 07:16 PM - Posted by Mark_GB

I believe that AMD went with GloFo because GloFo could produce more wafers over the long term than TSMC had the capacity for. TSMC got Apple as a client a few years ago, and Apple buys so many wafers per month that any fab, except maybe Intel, would be pushing its limits just trying to deal with that incredible Apple volume.

July 1, 2016 | 12:20 AM - Posted by Anonymous (not verified)

Where are you getting these figures from?

July 3, 2016 | 02:31 PM - Posted by Anonymous (not verified)

His nether-regions.

July 1, 2016 | 03:49 AM - Posted by JohnGR

I wouldn't say that AMD makes only $40. The card looks really simple and designed to be cheap. Other than the Polaris GPU, an RX 480 could be cheaper to make than a 4GB custom R7 370, for example, which sells for less than $130.

http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007709%20...

June 30, 2016 | 09:31 PM - Posted by remc86007

On the topic of the post: I'd hazard a guess that AMD's 480 alone will outsell NVidia's 1070 and 1080 2:1.

June 30, 2016 | 09:34 PM - Posted by leszy (not verified)

But the 1k GTX 1080s were sold over one month, and the RX 480s in two days.

June 30, 2016 | 10:48 PM - Posted by Anonymous (not verified)

You beat me to it.

There's quite a difference between 1 month for the 1080 and 1 or 2 days for the RX 480. AMD did in fact stockpile quantities of RX 480s worldwide, and they got bought up like crazy.

July 1, 2016 | 10:56 AM - Posted by Anonymous (not verified)

Given that the 1080 is still sold out as soon as stock is available, I would be confident in saying that if they had the same amount of 1080s at launch as they had RX480s, they'd have sold out as well. The demand for 1080s is enormous.

July 1, 2016 | 05:40 PM - Posted by Anonymous (not verified)

crazy... is thousands in your world...

PS4 has 40 million users

July 1, 2016 | 07:39 PM - Posted by Anonymous (not verified)

ROTFL... fighting for crumbs alright! XD

July 2, 2016 | 01:56 PM - Posted by leszy (not verified)

I mean at OCUK exactly.

July 1, 2016 | 12:27 AM - Posted by Orthello (not verified)

Yes, that time scale is a little different, lol. There's still a big price gap too, so you would expect fewer buyers at Founders Edition pricing.

Although I'd recommend the AIB 480 cards for enthusiasts, I'm glad to see AMD sold out with their huge numbers of the reference card.

July 1, 2016 | 07:16 PM - Posted by arbiter

Reports said AMD had 8,000 cards for the US market; Nvidia had around 10,000 of just the GTX 1080. I haven't heard how long the AMD cards took to sell out, but I've heard the Nvidia cards sold out very fast, an hour at most.

July 1, 2016 | 07:40 PM - Posted by Anonymous (not verified)

That was, in part, just baseless [H]ardOCP rumor FUD.

July 2, 2016 | 04:09 AM - Posted by Lurf (not verified)

To me the RX 480 isn't an exciting card at all. It's about on par with my 970 ("whooptie......doo"). Sure, it drives prices down and brings more performance to the masses. That's all well and good, but I want the max performance available right now, mostly for my Vive, but also high-end ("normal") gaming. That's why it's a bummer that only reference 1080s are available right now (why on earth would anyone buy that... the cooler looks/sounds terrible?).
In short, I would buy a 1080 AIB in an instant if I could. But there's no stock... I bet there are many more people like me. What's more, the 1070 is already out and the 1060 is on its way. All in all, I think Nvidia has the more interesting line-up, from far superior performance with the 1080 all the way down to the lower-priced 1070 and soon the 1060/1050s. It's good that AMD is here to drive competition, but it's very unlikely I'll buy one of their cards anytime soon. But that's just me.

July 2, 2016 | 07:11 AM - Posted by JohnGR

Then go and buy a Founders Edition. And if you can't find a cheap one, go and buy a 1080 at $800. What do you expect from AMD? To go and create a 600mm2 chip at GlobalFoundries, who couldn't create their own 14nm and had to implement Samsung's, so that you can find a cheaper Nvidia card? You want performance? Go and pay Nvidia. Stop whining and PAY. You can't love Nvidia and get away cheap at the same time. PAY.

July 2, 2016 | 11:45 PM - Posted by -- (not verified)

They want a 1080 for $199.

They are the democrats of the computer world: GIVE ME EVERYTHING... cost nothing.

July 3, 2016 | 04:48 AM - Posted by leon (not verified)

Yes, we would all love to have the best. I have always been an Nvidia/Intel fan.

But I have to admit there is a vast majority playing games on aging hardware in the GTX bracket, and tons playing on GT-level cards, who will only be getting a new card in the $100-200 range.

And thanks to AMD they will finally get decent FPS and quality at that price.

All these fanboy arguments of "your Toyota is not as fast as my beemer" are just pathetic. If it were priced like a beemer, by all means go ahead, but it is not.

Yes, the PCI power problem would put me in the wait-and-see group before buying.

Like I said, I've been an Nvidia fan, but your beemer-priced item should then perform like a beemer on DX12 titles too.

Surely the guys who can't upgrade often have to consider how future titles will run, and newer Maxwell Nvidia hardware not matching very old AMD tech in DX12 is not helping.

And for the budget buyer, whose screen resolution will be 1080p anyway and who was waiting to buy a second-hand 970: now, for a lower price, he can get a new card that is much better at DX12. No-brainer.

July 3, 2016 | 11:11 AM - Posted by remc86007

This is an important point. I know Ryan mentioned the significantly better DX12 performance of the RX 480, but I think many reviewers failed to point it out.

I sold my 970 on eBay for the same price I bought the RX 480 at. Now I have a new card, with a new warranty, and I can finally play Quantum Break on my ultra-wide at 60 FPS. My 970 was only getting 30-35 fps even after turning down many settings. That fact alone would be tremendously unsettling for me if I had bought a 1070 or 1080. It's not like Nvidia is going to be able to convince developers not to use asynchronous compute, because all the major consoles for the foreseeable future are going to be running GCN.

Another thing that bothered me about a lot of the reviews was the comparison of the RX 480 reference card to aftermarket-cooled 970s purchased very recently. I bought a 970 the day of launch from EVGA and paid the premium for a factory-overclocked model, and I couldn't push it over 1404 MHz without it freezing. Maybe I just had a bad card, but I think the first hardware revisions of those cards (the ones many people will be picking up on eBay now) simply can't overclock like the 970s made in the last year.
