
The AMD Radeon R9 Fury X 4GB Review - Fiji Finally Tested

Manufacturer: AMD
Tagged: 4GB, amd, Fiji, Fury, fury x, hbm, R9, radeon

A fury unlike any other...

Officially unveiled by AMD during E3 last week, we are finally ready to show you our review of the brand new Radeon R9 Fury X graphics card. Very few times has a product launch meant more to a company, and to its industry, than the Fury X does this summer. AMD has been lagging behind in the highest-tiers of the graphics card market for a full generation. They were depending on the 2-year-old Hawaii GPU to hold its own against a continuous barrage of products from NVIDIA. The R9 290X, despite using more power, was able to keep up through the GTX 700-series days, but the release of NVIDIA's Maxwell architecture forced AMD to move the R9 200-series parts into the sub-$350 field. This is well below the selling prices of NVIDIA's top cards.


The AMD Fury X hopes to change that with a price tag of $650 and a host of new features and performance capabilities. It aims to once again put AMD's Radeon line in the same discussion with enthusiasts as the GeForce series.

The Fury X is built on the new AMD Fiji GPU, an evolutionary part based on AMD's GCN (Graphics Core Next) architecture. This design adds a lot of compute horsepower (4,096 stream processors), and it is also the first consumer product to integrate HBM (High Bandwidth Memory) with a 4096-bit memory bus!

Of course the question is: what does this mean for you, the gamer? Is it time to start making a place in your PC for the Fury X? Let's find out.

Continue reading our review of the new AMD Radeon R9 Fury X Graphics Card!!

Recapping the Fiji GPU and High Bandwidth Memory

Because of AMD's trickle of pre-release information on the Fury X, we already know much about the HBM design and the Fiji GPU. HBM is a fundamental shift in how memory is produced and utilized by a GPU. From our original editorial on HBM:

The first step in understanding HBM is to understand why it’s needed in the first place. Current GPUs, including the AMD Radeon R9 290X and the NVIDIA GeForce GTX 980, utilize a memory technology known as GDDR5. This architecture has scaled well over the past several GPU generations but we are starting to enter the world of diminishing returns. Balancing memory performance and power consumption is always a tough battle; just ask ARM about it. On the desktop component side we have much larger power envelopes to work inside but the power curve that GDDR5 is on will soon hit a wall, if you plot it far enough into the future. The result will be either drastically higher power consuming graphics cards or stalling performance improvements of the graphics market – something we have not really seen in its history.

Historically, when technology comes to an inflection point like this, we have seen the integration of technologies on the same piece of silicon. In 1989 we saw Intel move cache and floating point units onto the processor die; in 2003 AMD was the first to merge the north bridge memory controller into the CPU design; then graphics, the south bridge, even voltage regulation – they all followed suit.


The answer for HBM is an interposer. The interposer is a piece of silicon that both the memory and processor reside on, allowing the DRAM to be in very close proximity to the GPU/CPU/APU without being on the same physical die. This close proximity allows for several very important characteristics that give HBM its advantages over GDDR5. First, it allows for extremely wide communication bus widths: rather than 32 bits per DRAM, we are looking at 1024 bits for a stacked array of DRAM (more on that in a minute). Being closer to the GPU also means the clocks that regulate data transfer between the memory and processor can be simplified, and slower, to save power and reduce design complexity. As a result, the proximity of the memory means that the overall memory design and architecture can improve performance per watt to an impressive degree.


So now that we know what an interposer is and how it allows the HBM solution to exist today, what does the high bandwidth memory itself bring to the table? HBM is DRAM-based but was built with low power consumption and ultra-wide bus widths in mind. The idea was to target a “wide and slow” architecture, one that scales up with high amounts of bandwidth and where latency wasn’t as big of a concern. (Interestingly, latency was improved in the design without intent.) The DRAM chips are stacked vertically, four high, with a logic die at the base. The DRAM die and logic die are connected to each other with through-silicon vias (TSVs), small holes drilled in the silicon that permit die-to-die communication at incredible speeds. Allyn taught us all about TSVs back in September of 2014 after a talk at IDF, and if you are curious about how this magic happens, that story is worth reading.

The first iteration of HBM on the flagship AMD Radeon GPU will include four stacks of HBM, a total of 4GB of GPU memory. That should give us in the area of 500 GB/s of total bandwidth for the new AMD Fiji GPU; compare that to the R9 290X today at 320 GB/s and you’ll see a raw increase of around 56%. Memory power efficiency improves at an even greater rate: AMD claims that HBM will result in more than 35 GB/s of bandwidth per watt of power consumption by the memory system, while GDDR5 manages just over 10 GB/s per watt.

AMD has sold me on HBM for high-end GPUs; I think that comes across in this story. I am excited to see what AMD has built around it and how this improves their competitive stance with NVIDIA. Don’t expect dramatic decreases in total power consumption with Fiji simply due to the move away from GDDR5, though every bit helps when you are trying to offer improved graphics performance per watt. How a 4GB limit on the memory system of a flagship card in 2015-2016 will pan out is still a question to be answered, but the additional bandwidth HBM provides offers never-before-seen flexibility to the GPU and software developers.
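The bandwidth and efficiency figures quoted above are easy to sanity-check. Here is a back-of-the-envelope sketch in Python using AMD's published numbers; the wattage estimates are derived from AMD's claimed efficiency ratios, not from measurement:

```python
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s: pins * bits/s per pin / 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

# HBM gen 1: each stack has a 1024-bit bus at 500 MHz DDR (1 Gbps per pin)
per_stack = bandwidth_gbs(1024, 1.0)   # 128 GB/s per stack
total_hbm = 4 * per_stack              # 512 GB/s across Fiji's four stacks

# R9 290X for comparison: 512-bit GDDR5 at 5 Gbps per pin
r9_290x = bandwidth_gbs(512, 5.0)      # 320 GB/s

# AMD's claimed efficiency: >35 GB/s per watt for HBM, just over 10 for GDDR5
hbm_memory_watts   = total_hbm / 35    # roughly 15 W for the memory system
gddr5_memory_watts = r9_290x / 10      # roughly 32 W
```

Note that the final 512 GB/s figure works out to a raw increase of 60% over Hawaii; the ~56% quoted above comes from the more conservative 500 GB/s estimate that circulated before launch.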

And from Josh's recent Fiji GPU architectural overview:

AMD leveraged HBM to feed their latest monster GPU, but there is much more to it than memory bandwidth and more stream units.

HBM does require a new memory controller as compared to what was utilized with GDDR5.  There are 8 new memory controllers on Fiji that interface directly with the HBM modules.  These are supposedly simpler than what we have seen with GDDR5 due to not having to work at high frequencies.  There are also the logic chips at the base of the stacked modules, and the less exotic interface needed to address those units as compared to GDDR5.  The changes have resulted in higher bandwidth, lower latency, and lower power consumption as compared to previous units.  It also likely means a smaller amount of die space is needed for these units.


Fiji also improves upon what we first saw in Tonga.  It can do as many theoretical primitives per clock (4) as Tonga, but AMD has improved the geometry engine so that the end result will be faster than what we have seen previously.  It will have a per-clock advantage over Tonga, but we have yet to see how much.  It shares the 8-wide ACE (Asynchronous Compute Engine) arrangement that is very important in DX12 applications which can leverage them.  The ACE units can dispatch a large number of instructions of multiple types and further leverage the parallelization of a GPU in that software environment.

The chip features 4 shader engines, each with its own geometry processor (each improved from Tonga).  Each shader engine features 16 compute units.  Each CU again holds 4 x 16-wide vector units plus a single scalar unit.  AMD categorizes this as a 4096 stream unit processor.  The chip has the xDMA engine for bridgeless CrossFire, the TrueAudio engine for DSP-accelerated 3D audio, and the latest VCE and UVD accelerators for video.  Currently the video decode engine supports up to H.265, but does not handle VP9… yet.

In terms of stream units it is around 1.5X that of Hawaii.  The expectation off the bat would be that the Fiji GPU will consume 1.5X the power of Hawaii.  This, happily for consumers, is not the case.  Tonga improved on power efficiency to a small degree with the GCN architecture, but it did not come close to matching what NVIDIA did with their Maxwell architecture.  With Fiji it seems like AMD is very close to approaching Maxwell.


Fiji includes improved clock gating capabilities as compared to Tonga.  This allows areas not in use to go to a near zero energy state.  AMD also did some cross-pollination from their APU group with power flow.  Voltage adaptive operations apply only the voltage that is needed to complete the work for a specific unit.  My guess is that there are hundreds, if not thousands, of individual sensors throughout the die that provide data to a central controller that handles voltage operations across the chip.  It also figures out workloads so that it doesn’t overvolt a particular unit more than necessary to complete the work.

The chip can dispatch 64 pixels per clock.  This gets important at 4K resolutions because those pixels need to be painted somehow.  The chip includes 2 MB of L2 cache, double that of the previous Hawaii.  This goes back to the memory subsystem and 4 GB of memory.  A larger L2 cache is extremely important for consistently accessed data for the compute units.  It also helps tremendously in GPGPU applications.

Fiji is certainly an iteration of the previous GCN architecture.  It does not add a tremendous amount of features to the line, but what it does add is quite important.  HBM is the big story, as well as the increased power efficiency of the chip.  Combined, these allow a nearly 600 sq mm chip with 4GB of HBM memory to exist at a 275 watt TDP that exceeds that of the NVIDIA Titan X by only around 25 watts.
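Josh's shader hierarchy works out arithmetically; a quick sketch in Python using the unit counts from the overview above:

```python
# Fiji's shader hierarchy: 4 shader engines, 16 compute units per engine,
# and each CU holding 4 x 16-wide SIMD vector units.
shader_engines = 4
cus_per_engine = 16
simds_per_cu   = 4
lanes_per_simd = 16

compute_units     = shader_engines * cus_per_engine                # 64 CUs
stream_processors = compute_units * simds_per_cu * lanes_per_simd  # 4096

# Hawaii (R9 290X) ships 2816 stream processors, so Fiji is ~1.45x,
# which is the "around 1.5X" figure cited above.
ratio_vs_hawaii = stream_processors / 2816
```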

Now that you are educated on the primary changes brought forth by the Fiji architecture itself, let's look at the Fury X implementation.

AMD Radeon R9 Fury X Specifications

AMD has already announced that the flagship Radeon R9 Fury X is going to have some siblings in the not-too-distant future. That includes the R9 Fury (non-X) that partners will sell with air cooling as well as a dual-GPU variant that will surely be called the AMD Fury X2. But for today, the Fury X stands alone and has a very specific target market.

Spec               R9 Fury X        GTX 980 Ti    TITAN X       GTX 980       TITAN Black   R9 290X
GPU                Fiji             GM200         GM200         GM204         GK110         Hawaii XT
GPU Cores          4096             2816          3072          2048          2880          2816
Rated Clock        1050 MHz         1000 MHz      1000 MHz      1126 MHz      889 MHz       1000 MHz
Texture Units      256              176           192           128           240           176
ROP Units          64               96            96            64            48            64
Memory             4GB              6GB           12GB          4GB           6GB           4GB
Memory Clock       500 MHz          7000 MHz      7000 MHz      7000 MHz      7000 MHz      5000 MHz
Memory Interface   4096-bit (HBM)   384-bit       384-bit       256-bit       384-bit       512-bit
Memory Bandwidth   512 GB/s         336 GB/s      336 GB/s      224 GB/s      336 GB/s      320 GB/s
TDP                275 watts        250 watts     250 watts     165 watts     250 watts     290 watts
Peak Compute       8.60 TFLOPS      5.63 TFLOPS   6.14 TFLOPS   4.61 TFLOPS   5.1 TFLOPS    5.63 TFLOPS
Transistor Count   8.9B             8.0B          8.0B          5.2B          7.1B          6.2B
Process Tech       28nm             28nm          28nm          28nm          28nm          28nm
MSRP (current)     $649             $649          $999          $499          $999          $329

The most impressive specification is the stream processor count, sitting at 4,096 for the Fury X, an increase of 45% when compared to the Hawaii GPU used in the R9 290X. Clock speeds didn't decrease to get there either, which means gaming performance has the chance to be substantially improved with Fiji. Peak compute capability jumps from 5.63 TFLOPS to an amazing 8.6 TFLOPS, easily outpacing even the NVIDIA GeForce GTX Titan X rated at 6.14 TFLOPS.
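Those peak compute numbers fall straight out of the shader counts and rated clocks in the table above; a quick check in Python, counting a fused multiply-add as two FLOPs per lane per clock (the usual convention):

```python
def peak_tflops(stream_processors, clock_ghz, flops_per_clock=2):
    # flops_per_clock = 2 because a fused multiply-add counts as two operations
    return stream_processors * flops_per_clock * clock_ghz / 1000

fury_x  = peak_tflops(4096, 1.050)  # ~8.60 TFLOPS
r9_290x = peak_tflops(2816, 1.000)  # ~5.63 TFLOPS
titan_x = peak_tflops(3072, 1.000)  # ~6.14 TFLOPS
```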

Texture units also increased by the same 45%, but there is a question around the ROP count. With only 64 render back ends present on Fiji, the same amount as the Hawaii XT GPU used on the R9 290X, the GPU's capability for final blending might be in question. It's possible that AMD feels the ROP performance of Hawaii was overkill for the pixel processing capability it provided, and that the proper balance was found by preserving the 64 ROP count on Fiji. I think we'll find some answers in our benchmarking and testing going forward.

With 4GB on board, a limitation of the current generation of HBM, the AMD Fury X stands against the GTX 980 Ti with 6GB and the Titan X with 12GB. Heck, even the new Radeon R9 390X and 390 ship with 8GB of memory. That presents another potential problem for AMD's Fiji GPU: will the memory bandwidth and driver improvements be enough to counter the smaller frame buffer size of the Fury X compared to its competitors? AMD is well aware of this but believes that a combination of the faster memory interface and "tuning every game" will keep the 4GB memory limit from becoming a bottleneck. AMD noted that the GPU driver is responsible for memory allocation, and technologies like memory compression and caching can drastically impact memory footprints.

While I agree that the HBM implementation should help things, I don't think it's automatic; raw bandwidth doesn't change how much data a game actually needs resident in the frame buffer. And while tuning for each game will definitely be important, that puts a lot of pressure on AMD's driver and developer relations teams to get things right on day one of every game's release.


At 512 GB/s, the AMD Fury X exceeds the available bandwidth of the GTX 980 Ti by 52%, even with a rated memory clock of just 500 MHz. That added memory performance should allow AMD to be more flexible with memory allocation, but drivers will definitely have to be Fiji-aware to change how data is brought into the system.
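That 52% figure is just the ratio of the two interfaces' width times per-pin rate, "wide and slow" versus "narrow and fast" in one line of arithmetic:

```python
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8  # GB/s

fury_x    = bandwidth_gbs(4096, 1.0)  # 512 GB/s: 4096-bit HBM at 500 MHz DDR
gtx_980ti = bandwidth_gbs(384, 7.0)   # 336 GB/s: 384-bit GDDR5 at 7 Gbps/pin
advantage = fury_x / gtx_980ti - 1    # ~0.52, the 52% cited above
```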

Fury X's TDP of just 275 watts, 15 watts lower than the Radeon R9 290X, says a lot for the improvement in efficiency that Fiji offers over Hawaii. However, the GTX 980 Ti still runs at a lower 250 watts; I'll be curious to see how this is reflected in our power testing later.

Just as we have seen with NVIDIA's Maxwell design, the 28nm process is being stretched to its limits with Fiji. A chip with 8.9 billion transistors is no small feat, running past the GM200 by nearly a billion (and even that was astonishing when it launched).


June 24, 2015 | 08:22 AM - Posted by Anonymous (not verified)

Why compare with the 980 Ti if it can't keep up? Stupid marketing spoiled what is otherwise an excellent product with massive improvements. They should've sold this for $600, and no one could've complained.

June 24, 2015 | 09:05 AM - Posted by Anonymous (not verified)

Tom's Hardware's review came out a little positive, with better power figures too. But I expected a little more from them. Hope they can come up with a driver with a flat 5% improvement.

June 24, 2015 | 09:07 AM - Posted by Leyaena (not verified)

Why they compare it against the 980TI is pretty clear to me:
It comes in at the exact same price point.

I honestly feel bad for AMD, I had really hoped this would be their big break. But as it stands, the 980TI and especially Titan X still reign supreme.

June 24, 2015 | 10:21 AM - Posted by Eric (not verified)

I think he was saying 'why did AMD line it up with the 980 Ti' - that's why he said they should have priced it cheaper and then people would be saying 'yeah, it's not quite as fast, but it doesn't cost quite as much either'.

June 24, 2015 | 09:14 PM - Posted by Gary Lee (not verified)

If they had put the price point at $500-550, AMD would have sold scads more of them and would have delivered legendary performance for a gamer's buck.

June 24, 2015 | 09:58 PM - Posted by Anonymous (not verified)

your 980Ti is destroyed by Fury X

June 25, 2015 | 12:37 AM - Posted by Mandrake

Are you seriously expecting people to take AMD's own benchmark slides as proof? Every single major review from reputable sites like PCPer, HardOCP, TechReport, etc. has shown the 980 Ti is a little ahead of the Fury X.

Ryan's review was generous to AMD. A silver award. The Fury X isn't a bad card at all, it just isn't as good as the 980 Ti. It makes AMD far more competitive than they have been for a long while now. Given that they are using HBM technology and water cooling, though, I did hope for at least parity with the 980 Ti.

June 25, 2015 | 07:43 AM - Posted by Anonymous (not verified)

Idiot, Fury X got beaten left, right and center. Even overclock vs. overclock.

Any enthusiast paying $650 for a high-end card will overclock it like mad, and this just shows how piss-poor the Fury X performs. Heck, even a well overclocked 980 is comparable to it lmfao XD

June 25, 2015 | 09:33 AM - Posted by Morphius (not verified)

Exactly. "Good enough computing" is getting pretty boring, with reviewers and consumers seemingly only caring about making computational components less and less powerful.

Guess what, take a sledgehammer to a 290X and it will drop the power consumption by 100%! I really don't understand why they seem to not understand this.

So much money gets wasted on making lower-performing components these days...
I can only dream of what a 1~2kW liquid-cooled graphics subsystem would be able to achieve when paired with a 250~500Hz display.
But I digress, the human eye cannot even see more than 24 (or was it 12) frames per second (or minute?) so no one really cares either way. Sigh.

June 25, 2015 | 10:06 AM - Posted by Squall Loire (not verified)

"But I digress, the human eye cannot even see more than 24 (or was it 12) frames per second (or minute?) so no one really cares either way. Sigh."

Common misconception. The human eye requires 24 frames per second to blend frames into motion - but is capable of seeing much, much more than that. Tests have been performed that showed pilots were able to see and identify images of aircraft displayed for just 1/2000 of a second.
The only limiting factor to framerates with current tech is display refresh rates. No point going over 60fps if your screen only does 60Hz.

June 25, 2015 | 10:48 AM - Posted by Morphius (not verified)

Sadly I am well aware of this piece of actual scientific information. I happen to fall into a minuscule minority who are not able to perceive smooth motion even from today's most advanced 144Hz G-Sync displays (I bought one hoping it would help after my CRT died a couple of years back).
At 185Hz the CRT was still extremely flickery, but at least I was able to aim on target...

My comment was merely targeted at explaining why we are not seeing any real push for increased frame rates (or decreased latencies) beyond what was already available at the turn of the century. Untrue or not, what the masses believe automatically translates into an economic (dis)incentive.
The massive investment required in developing a high-power semiconductor process is simply insurmountable; therefore the single-threaded performance of a ~5GHz Sandy-Bridge caps the ultimate performance of most games at somewhere between 100~200fps, irrespective of GPU-power or screen resolution, for at least the next 10-15 years.

June 30, 2015 | 06:29 AM - Posted by Anonymous (not verified)

It's like 72 frames per second... after that the human eye can't really tell the difference.

June 24, 2015 | 01:22 PM - Posted by StephanS

It's compared to a 980 Ti because it's within a 10% performance range.
The stock Fury is 14% faster than the 980 Ti in Metro, which tells you how much shader compute this monster has to offer vs. the GTX 980 Ti.

Even though the Fury X is often faster than the GTX 980 Ti, I agree with your $600 price. At $599 AMD would have cleaned up the high-end gaming market. At $650 there is a barrier, since nvidia carries brand-name value over AMD. Sadly, we know many people are ready to spend 2x the money for something barely faster because it's nvidia.

Anyways, I hope AMD sells out so they can continue to push GPUs forward. Because if nvidia is the sole GPU provider for gaming, it's not going to be good for any of us.

June 24, 2015 | 09:55 PM - Posted by Anonymous (not verified)

HERE IS A BENCHMARK: 980 Ti vs. Fury X x2 :D and the winner issssss:

FURY X X2 of course....
and these two cards have the same price!!! But Fury X is becoming cheaper (and is better) and Nvidia will not change price... because this AMD series has destroyed it!!!

June 28, 2015 | 11:18 AM - Posted by Rob G (not verified)

These aren't real benchmarks; these were "leaked" official benchmarks released by AMD where they tweaked in-game settings to favor the Fury X. I can tweak in-game settings and OC my GTX 970 to make it look faster than an R9 390X too.

August 24, 2015 | 06:32 PM - Posted by memester (not verified)

If you are doing non-gaming workloads the R9 Fury X can be the better GPU by far, with its much higher compute performance.

June 24, 2015 | 08:24 AM - Posted by Anonymous (not verified)

From AMD's Richard Huddy Q&A, regarding HDMI 2.0:

Active DisplayPort 1.2a-to-HDMI 2.0 adapters are set to debut this summer. These adapters will enable any graphics cards with DP1.2a outputs to deliver 4K@60Hz gaming on UHD televisions that support HDMI 2.0.

June 24, 2015 | 08:26 AM - Posted by Ryan Shrout

I've used these types of adapters for 5-6 years and I can say that they never quite work 100% like they are supposed to. I don't consider this a good excuse for not having HDMI 2.0.

June 24, 2015 | 08:38 AM - Posted by Anonymous (not verified)

No, you have not "used these types of adapters for 5-6 years."
The adapters of 5-6 years ago were passive DVI-to-HDMI!

The active DP1.2a-to-HDMI 2.0 adapters arrive in the winter.
Active = DP1.2 at 300 MHz converted to HDMI 2.0 at 600 MHz

a completely different product

June 24, 2015 | 09:09 AM - Posted by Ryan Shrout

I understand that the HDMI 2.0 part is going to be new. But I have definitely been using DP to DL-DVI adapters for 5-6 years, despite what your petulant comment indicated. Things like this have been around a LONG TIME:

It's the same fundamental technology to convert between two digital signals.

June 24, 2015 | 10:58 AM - Posted by Mobile_Dom

you don't see the word petulant as often as we should anymore

June 25, 2015 | 10:09 AM - Posted by Squall Loire (not verified)

You just said yourself that you've been using DP to DVI adapters, and not the DP to HDMI adapters the OP was referring to...

June 25, 2015 | 07:06 PM - Posted by Wolvenmoon (not verified)

HDMI and DVI use the same signaling over different physical jacks.

June 25, 2015 | 09:36 PM - Posted by Anonymous (not verified)

Agreed, I've been patiently waiting for this adapter for quite some time now. Would like to know which retailer was selling them for the last 5-6 years, because I know for sure it wasn't Newegg or Amazon.

June 24, 2015 | 08:57 AM - Posted by Genova84

I've been hearing about these rumored adapters for over a year now. People have posted their emails with both Accell and Bizlink and neither company is saying these are definitely launching this year. They say, "maybe by year end." I don't know why AMD wouldn't want to control this basic piece of hardware. It already has an HDMI port FFS! Why not just make that HDMI 2.0? I just don't get it.

June 24, 2015 | 09:18 AM - Posted by Anonymous (not verified)

because there is no such product in the world!

the problem is converting DP1.2a low-voltage 300 MHz to high-voltage 600 MHz,
and the driver should have all the protocols of the HDMI 2.0 standard.
It is very complicated.

MSI is saying their 380, 390 and 390X all have HDMI 2.0.

AMD admits it is HDMI 1.4 bandwidth.

AMD has HDMI 2.0 today, NVIDIA style;
the only difference is AMD admits it (HDMI 1.4 bandwidth, 10.2 Gbps)

HDCP 2.2?
DCI-P3 color?
SiI9777 chip?
18 Gbps bandwidth?
HDCP 2.2?

reviews without checking the truth are no reviews!

June 24, 2015 | 09:26 AM - Posted by Ryan Shrout

You might be our most persistent commenter to date. ;)

June 24, 2015 | 09:51 AM - Posted by TOMI (not verified)


June 24, 2015 | 09:52 AM - Posted by Jabbadap (not verified)

Nah, Maxwell 2.0's integrated display controller can do a 600 MHz pixel clock and full yuv4:4:4/rgb4:4:4 color at UHD over HDMI 2.0. And it does have an HDCP 2.2 link.

More problematic is the chip on the telly side, which in the early days (and even today) really was not capable of HDMI 2.0 (they were just modified HDMI 1.4). Thus, at the time the GTX 980/GTX 970 were released, there were tellies marketed as "HDMI 2.0 4K" which in reality were only capable of yuv4:2:0 at 2160p@60Hz (which even Keplers can do over HDMI 1.4 via drivers).

June 24, 2015 | 10:23 AM - Posted by BlackDove (not verified)

What monitor do you have that's DCI-P3? And you do realize that Rec.2020 monitors will be much better, and Rec.2020 will be a consumer and pro color standard, right? What content do you even need P3 over that cable for? Shouldn't you have a Quadro with SDI outputs?

June 24, 2015 | 12:34 PM - Posted by Allyn Malventano

Dude, when are you going to understand that Maxwell *does* support HDMI 2.0 with either 4:4:4 or full RGB 8 or 10 bit color at 4K60.

June 24, 2015 | 12:48 PM - Posted by Anonymous (not verified)

Don't think you can support something higher than the standard.

4K @ 60Hz:
10-bit 4:2:0

4K @ 30Hz:
8-bit 4:4:4

HDMI 2.0 can't do 4:4:4 @ 60Hz or support Rec.2020 @ 60Hz. You have to use DisplayPort 1.2 or higher for that.

June 25, 2015 | 07:18 AM - Posted by PCPerversion

Apart from the fact that Ryan Shrout and Allyn Malventano clearly chug Nvidia cock, this isn't such a bad review.

June 24, 2015 | 09:42 PM - Posted by Anonymous (not verified)

exactly!!! Nvidia DONT HAVE HALF features witch have AMD!!! Nvidia fanboys ONLY knows says: Nvidia have lower TDP :((( WOOOW.loer TDP!!!! fanboys,BUY yorself GOOD PSU!!! if you are gamers. every true gamer have great PSU. and most inportant,have Nvidia maybe VSR???? if you fanboys know what is this... no,dont have! beacuse,Nvidia is SHIT!!! can not more steal and now do terrible cards... drivers especially bad. they do not have a good driver for 9xx series! even do not know how much they have VRAM on cards.. so... Nvidia is simple big shit!!!!

June 25, 2015 | 12:41 AM - Posted by Mandrake

VSR is essentially a blanket copy of DSR. Oops. DSR works on everything back to Fermi.

July 2, 2015 | 03:29 PM - Posted by Anonymous (not verified)

Every tru gamer speak english good

June 24, 2015 | 12:37 PM - Posted by Anonymous (not verified)

Somehow I don't think the 20 people in the world with 4k 60Hz HDMI 2.0 televisions who are looking at buying a high-end graphics card for a gaming HTPC are the market AMD is shooting for with this card. By the time 4k 60Hz televisions become so widespread that HDMI 2.0 is a requirement, the original Fury is going to be long obsolete.

June 24, 2015 | 12:49 PM - Posted by Martin Trautvetter

You should shop around for a new TV, HDMI 2.0 / 4k60 is everywhere.

Which makes AMD's omission of HDMI 2.0 so baffling.

June 24, 2015 | 01:19 PM - Posted by trenter (not verified)

Ryan, please consider adding more games to your testing suite. I think AMD will come out looking a lot better in some of the newer games like Shadow of Mordor, Far Cry 4, and possibly upcoming games not developed for last-gen console hardware. Good review though, the power figures were surprising.

June 24, 2015 | 08:30 AM - Posted by Martin Trautvetter

And I'm sure Mr Huddy will hand-deliver one of those to every new owner of a Fury X, right?

June 24, 2015 | 09:14 AM - Posted by Gunbuster

Indeed, the classic AMD Coming Soon™ with a splash of not our problem.

It's hard enough to find a 4k TV that even does HDMI 2.0 4k 60Hz 4:4:4 correctly. You don't want to be throwing a janky active converter into the middle of that.

June 24, 2015 | 10:25 AM - Posted by BlackDove (not verified)

They're all a complete waste since NONE of the current 4K TVs support the REAL 4K standard, which is Rec.2020.

June 24, 2015 | 12:48 PM - Posted by Anonymous (not verified)

"It's hard enough to find a 4k TV that even does HDMI 2.0 4k 60Hz 4:4:4 correctly."

This is pretty much why it's no big deal that Fury X doesn't have HDMI 2.0 - nobody needs it. By the time 4k 60Hz televisions are widespread enough that HDMI 2.0 is a requirement, the Fury X will be obsolete anyway.

June 24, 2015 | 08:26 AM - Posted by Martin Trautvetter

Too bad about the pump noise.

June 24, 2015 | 09:26 AM - Posted by obababoy

The other websites said this is a non-issue once you change the speed profile a bit.

June 24, 2015 | 10:33 AM - Posted by Ryan Shrout

Maybe someone did, but TR, HWC and Guru3D all make comments on the pump noise.

June 24, 2015 | 11:38 AM - Posted by Martin Trautvetter

AFAIK you can't change the speed of the pump. Ryan?

btw: there are reports of a retail sample Fury X having toned-down, but still existing, pump noise when compared to the press samples.

June 24, 2015 | 05:17 PM - Posted by djotter

I have the same pump noise on my CoolerMaster Nepton 140XL. I turned down the pump to reduce the whine, but it is still audible outside my case.

June 24, 2015 | 08:35 AM - Posted by Tedders (not verified)

"AMD told me this week that the driver would have to be tuned "for each game". This means that AMD needs to dedicate itself to this cause if it wants Fury X and the Fury family to have a nice, long, successful lifespan."

This scares me.

June 24, 2015 | 08:40 AM - Posted by Cataclysm_ZA

Don't be afraid. That's exactly the reason why an HD 7970 GHz is still a decent performer today and still gets game improvements added to it. Same thing with the R9 290X, which actually improved drastically over time to where it beats the GTX 980 in quite a few games. AMD does need to work on drivers and apply the same tessellation fix to Fiji that they did with Hawaii, but overall I can see this card becoming more competitive over time.

June 24, 2015 | 01:30 PM - Posted by StephanS

I actually can't wait to see DX12 games on a 290X vs. a GTX 980.

From Mordor & Metro, we see that the 290X has more oomph than the GTX 980 at high resolution, but gets clobbered at 1080p.

If DX12 drivers are at parity on both platforms, the numbers might hold across resolutions, and we might see that the 290X is always faster than the GTX 980...

But then again, Battlefield 4 Mantle seems broken, so maybe AMD won't be able to capitalize on DX12?

June 24, 2015 | 07:51 PM - Posted by Titan_V (not verified)

Mantle in Battlefield 4 is most definitely NOT broken. I use it every day. Works great!

June 24, 2015 | 12:39 PM - Posted by Barfly (not verified)

So they make a card that took years to develop and they could not release it with a decent driver on day one?
This is like game devs releasing PC ports.
So sick of this stuff.

June 24, 2015 | 08:45 AM - Posted by Cataclysm_ZA

Ryan, any word from AMD on why tessellation performance for the Fury X is low-ish? It's barely better than Tonga (R9 285), let alone the R9 290X.

June 24, 2015 | 08:53 AM - Posted by Martin Trautvetter

Can't cheat fast enough? :p

June 24, 2015 | 08:58 AM - Posted by Cataclysm_ZA

I'm not sure what you're referring to here.

June 24, 2015 | 12:44 PM - Posted by Martin Trautvetter

Just making light of AMD 'finding' massive amounts of extra tessellation performance in one- and two-year-old GPUs.

June 24, 2015 | 09:11 AM - Posted by Ryan Shrout

The answer as to why is just because it doesn't integrate any more tessellation hardware than Tonga did. Same with the ROP count.

June 24, 2015 | 09:17 AM - Posted by Cataclysm_ZA

Ah! Well, it's not a bad result and still a pretty good deal even at $650. I hope AMD's driver team makes this card last just as long and improve just as much as the HD 7970 did over time.

Techreport's review theorised that the number of ROP units wasn't what was holding back Hawaii. Is that your view as well?

June 24, 2015 | 05:21 PM - Posted by trenter (not verified)

Is it not using the Tonga improvements? Techreport claimed Tonga has substantially improved tessellation over Hawaii when they reviewed the 285.

June 25, 2015 | 12:04 PM - Posted by Cataclysm_ZA

It did indeed, and Tonga is much, much faster than any Radeon before it when it comes to tessellation. But the GTX 780 is still faster in tessellation benches than Fiji, which obviously plays out when testing performance in titles that have good Hairworks implementations.

June 24, 2015 | 08:45 AM - Posted by General Lee (not verified)

2% faster than 290X in GTA V at 1440p. How's that even possible...

$600 would've been a more fitting price. Sad to see AMD is still not able to create a faster GPU than Nvidia. They could really use a halo product, but Fury X doesn't really impress. I guess they could have much to gain with drivers, but going with a 980 Ti is probably the safer bet overall.

June 24, 2015 | 09:11 AM - Posted by Ryan Shrout

I agree that $600 would feel a lot better for this, and I asked AMD for a price drop yesterday, but they didn't listen to me. :)

In general though I think they are not going to have issues selling through their stock of the Fury X for several months.

June 24, 2015 | 11:02 AM - Posted by Rustknuckle (not verified)

Think it's going to be a while before you see a price drop, since they have already come down from the $800 they were planning to charge for Fury before the 980 Ti came out.

June 24, 2015 | 12:33 PM - Posted by Azix (not verified)

How did you know they were at $800?

June 24, 2015 | 03:37 PM - Posted by arbiter

That was the rumored price for the Fury X in the months leading up to launch ($850); the 980 Ti was rumored at $650.

June 24, 2015 | 09:00 AM - Posted by Anonymous (not verified)

So much for getting more dGPU market share AMD....

June 24, 2015 | 09:00 AM - Posted by Dark_wizzie (not verified)

Hey Ryan, why do you guys still use Unigine Heaven? Isn't Unigine Valley a newer benchmark?

June 24, 2015 | 09:12 AM - Posted by Ryan Shrout

Meh, not that big of a difference. Unigine Heaven is just a standardized synthetic test with an emphasis on tessellation.

June 24, 2015 | 10:20 AM - Posted by Dark_wizzie

The benchmark is 10x more satisfying to watch though. :')

June 24, 2015 | 09:04 AM - Posted by Anonymous (not verified)

All the AMD fanboys on various tech sites were hyping how this would destroy the Titan X / 980 Ti, so what now, huh?
Fury X got destroyed even with HBM, and it still uses more power.

June 24, 2015 | 09:18 AM - Posted by Gunbuster

I would not say "destroyed", but coming months late to the party, only being able to hang at +-5% performance in most cases, and still using more power is a pretty poor showing.

Thanks for beta testing HBM though AMD, I'm sure I'll enjoy it in an Nvidia card down the road ;)

June 24, 2015 | 03:41 PM - Posted by arbiter

Yeah, I am one of those people that always gets attacked for a lot of the things I say, despite them being true. There are things I'm wrong about, no one's perfect, but I was pretty much spot on about this card not being much faster, if at all, than the 980 Ti.

I was wrong about power draw, but given AMD's history it could have been in that range, so meh.

June 24, 2015 | 05:24 PM - Posted by trenter (not verified)

Go read other reviews with more games tested; Fury is 5% faster than the 980 Ti at 4K.

June 24, 2015 | 09:10 AM - Posted by Snake Pliskin (not verified)

This review sucks.

June 24, 2015 | 09:27 AM - Posted by Ryan Shrout


June 24, 2015 | 10:00 AM - Posted by Dark_wizzie (not verified)

I love your reviews.

psst: Tell Allyn I want trace-based analysis for gaming on an SSD. :)

June 24, 2015 | 10:34 AM - Posted by Ryan Shrout

I keep telling him to!

June 24, 2015 | 11:06 AM - Posted by Dark_wizzie


June 24, 2015 | 12:37 PM - Posted by Allyn Malventano

For gaming? Hmm, interesting idea, but the trace would be mostly a line riding 0 (for a game at least). I am cooking up some new stuff though - to be rolled out in a big enterprise review first and consumer parts later.

June 25, 2015 | 07:20 AM - Posted by PCPerversion

Apart from the fact that Ryan Shrout and Allyn Malventano clearly chug Nvidia cock, this isn't such a bad review.

June 24, 2015 | 10:44 AM - Posted by John H (not verified)

Clearly this entire article was stolen from JoshTekk

June 24, 2015 | 01:49 PM - Posted by Josh Walrath


June 24, 2015 | 09:12 AM - Posted by Anonymous (not verified)

Here choose your pickings, read it and weep:,1.html

I'm sure more will follow with the exact same figures and conclusions LOL

June 24, 2015 | 09:12 AM - Posted by Keven Harvey (not verified)

I wonder if it will pick up some steam on July 29th. Surely less driver overhead can only help.

June 24, 2015 | 09:27 AM - Posted by Ryan Shrout

The release of Windows 10 and DX12 will not suddenly make every game run on a DX12 engine.

June 24, 2015 | 09:32 AM - Posted by Boggins (not verified)

As with every new DX generation, it takes several years after release before games truly switch over to the new standard. Let's hope Microsoft's offer of a free Windows 10 upgrade for Windows 7 and 8/8.1 users speeds adoption up and gives developers a reason to move to DX12 earlier.

But by the time DX12 is standard in games, we'll be on to the next big GPU generation.

June 24, 2015 | 03:43 PM - Posted by arbiter

The graphics improvements will surely take some time, but I expect the speed improvements DX12 offers will be adopted pretty quickly, since they will open up things to make games look and run better.

June 24, 2015 | 05:34 PM - Posted by trenter (not verified)

The difference is that the Xbox One has DX12 and devs have been using it for a while now. Windows 10 is free, allowing a much larger install base. If people buy GPUs with the intention of keeping them for 2-3 years, then DX12 games will be shipping within that period of time.

June 24, 2015 | 09:43 AM - Posted by Anonymous (not verified)

It indeed does not, but testing Win10 right now, together with the Win10 beta drivers, already gives us better performance in DX11 games.

It seems that either Win10 is more efficient, DX12 magically improves DX11 performance, or the beta drivers for Win10 are actually better than the current beta drivers for Win8.1.

June 24, 2015 | 10:34 AM - Posted by Ryan Shrout

Source on this claim?

June 24, 2015 | 11:03 AM - Posted by General Lee (not verified)

There have been plenty of claims that AMD's drivers get improved DX11 CPU utilization on Win 10. Of course this is just some people on the forums; I haven't heard of any HW site making proper tests yet.

Here's an example someone has made for Project Cars:

June 24, 2015 | 11:13 AM - Posted by Anonymous (not verified)

Don't forget about the Vulkan graphics API, and some SteamOS/Linux testing as titles become available for Steam Box/SteamOS-based systems. Also some DX12 multi-adapter testing; I'm not sure about multi-adapter support in Vulkan, but there should be some. There are a lot of newer graphics API technologies that will have to be tested on the latest cards, and back-tested on older-generation cards that may be able to take advantage of them to a lesser degree than the newest cards, but there still may be improvements.

June 25, 2015 | 03:42 PM - Posted by Keven Harvey (not verified)

The game, no, but the driver, maybe. Your own 3DMark API testing has shown that AMD gets no benefit going from DX11 single-threaded to DX11 multi-threaded. Maybe it can somehow free up the main thread that the game is using.

June 24, 2015 | 09:23 AM - Posted by onion uk (not verified)

I'm kinda getting the feeling the drivers are playing a sizable role in the weaker performance.

June 24, 2015 | 09:26 AM - Posted by Gunbuster

Of course, better drivers Coming Soon™ from the software experts at AMD...

June 24, 2015 | 09:25 AM - Posted by Anonymous (not verified)

What a joke, AMD

June 24, 2015 | 11:24 AM - Posted by Anonymous (not verified)

What high prices (higher than the already-too-high Nvidia prices) you Nvidia fans would have to pay if AMD were not at least within the margin of error with their competing products. Every driver needs tweaking no matter the GPU maker, especially for the latest hardware. AMD is not doing too shabbily with only 4GB of memory, and as soon as HBM2 is available it will be on AMD's updated Fiji cards with very little extra engineering needed. Nvidia will still be in its testing and certification phases with HBM, while AMD will have almost drop-in capability to accept the newer HBM standard.

June 24, 2015 | 09:28 AM - Posted by Boggins (not verified)

I guess the 980 Ti ended up far faster than AMD expected. They probably wanted to pit the Fury X against the Titan X, and the Fury against the 980/980 Ti. Unfortunately, at the same price point as the 980 Ti, there doesn't seem to be much reason for anyone to consider buying the AMD Fury X right now, especially since it doesn't sound like the drivers are fully optimized and there are availability problems. It's unfortunate, but it also sounds like Nvidia has priced the 980 Ti aggressively enough that they won't need to make a price drop.

As much as I commend AMD's different approach with this new card, I am also disappointed that they weren't able to achieve better performance to shake up Nvidia's share of the market.

June 24, 2015 | 09:48 PM - Posted by Anonymous (not verified)

You stupid noob... the Fury X eats the GTX 980 Ti!!! Here is a benchmark.

And the GTX 980 Ti and Fury X have the same PRICE!!! Get informed before you start to write nonsense... and the most powerful cards in the world are the Fury X x2 and R9 295X2!!! All the Nvidia cards are not even close. This is real power! AMD... Nvidia is and always will be SHIT!!! And a thief!

June 24, 2015 | 09:28 AM - Posted by obababoy


Did you guys use the newer 15.15 driver? I am assuming so.

June 24, 2015 | 09:41 AM - Posted by Ryan Shrout

Yes, that's listed on our Test Setup page.

June 24, 2015 | 01:21 PM - Posted by Anonymous (not verified)

15.6 was released a day before you posted this review.

June 24, 2015 | 01:57 PM - Posted by Josh Walrath

I believe 15.15 is based on a newer rev of Catalyst as compared to 15.6 (which features Batman optimizations).

June 24, 2015 | 09:39 AM - Posted by justin150 (not verified)

Love the look of the card (shame the watercooling tubes aren't on the side of the card with the power connections at the back, rather than the other way around), love the size, but...

Just not good enough, except for mini-ITX builds.

It needs a price drop, and I will wait for a version with a custom water block and no pump.

Knock $100 off the price and they have a winner, but head to head with the 980 Ti it's a bit disappointing.

June 24, 2015 | 09:40 AM - Posted by Anonymous (not verified)

All these cards, including the Titan X, are not for 4K PC gaming, period. Veteran PC gamers don't play games at console framerates, FFS. It's either a minimum of 60fps or GTFO.

With older games you can, but only the really old games...

1440p is where it's at now, and the 980 Ti and Titan X can do that best, with higher overclocks out of the box and lower power consumption. Not forgetting the multi-monitor & 21:9 guys either.

June 24, 2015 | 09:42 AM - Posted by Ryan Shrout

I actually agree with you here. Every card that comes out claiming to be "perfect for 4K" is being disingenuous. And I think both NVIDIA and AMD are guilty of it.

June 24, 2015 | 11:20 AM - Posted by Anonymous (not verified)

Yep, I have a 980 Ti Hybrid. It's perfect for maxed-out 1440p with Vsync on. It's stutter-free.

June 24, 2015 | 09:51 PM - Posted by Anonymous (not verified)

The 980 Ti and Titan Z can hide from the Fury X x2!!! The Fury X x2 and R9 295X2 are the most powerful GPUs in the world!!! And they're not loud like the 980 Ti and Titan... that thing sounds like a tractor...

June 24, 2015 | 09:42 AM - Posted by J Nev (not verified)

The bottom line: should have come out 6 months ago.

June 24, 2015 | 09:46 AM - Posted by TheAnonymousJuan (not verified)

Was really hoping for something better from AMD. Oh well, I just ordered up my 980 ti from Newegg after reading this and a few other reviews.

June 24, 2015 | 09:47 AM - Posted by xfsdfg (not verified)

For god's sake, some games barely play at 1080p on max settings... where is the 1080p benchmark again? There is absolutely no point in chasing a bigger display with every new graphics card when a solid 60 fps can't be guaranteed in every game at max.

June 24, 2015 | 09:49 AM - Posted by obababoy

Tell us about 1080 60fps 13 or 14 more times and I swear I will flip out!

June 24, 2015 | 09:50 AM - Posted by J Nev (not verified)

Excuse me Ryan: I hope you do a 390x review soon?

June 24, 2015 | 10:35 AM - Posted by Ryan Shrout

AMD has not seemed eager to send these out for review. But I'll get my hands on one!

June 24, 2015 | 03:44 PM - Posted by arbiter

The 390X is an overclocked 290X, so performance of those cards is only about 10% higher.

June 24, 2015 | 09:53 AM - Posted by Bri (not verified)

Now that the reviews are out and it is indeed so very close to the 980 Ti, it does make me stop and think. Did Nvidia engage in some corporate espionage here? Why else would the company introduce a part that undercuts their own higher priced Titan unless they were already expecting the Fury and attempting to dampen the reaction...

June 24, 2015 | 09:57 AM - Posted by obababoy

Unlikely :) Nvidia has just been overcharging for the Titan X and had enough wiggle room to undercut it with a $650 almost-equal version. Besides, AMD set the price based on the 980 Ti.

It compares pretty well. My issue with AMD is that on paper this card should be beating the 980 Ti... Something fishy with the drivers has to be a factor. Right?

June 24, 2015 | 10:02 AM - Posted by Josh Walrath

They both use the same suppliers, have the same board partners, and many employees know one another. There aren't a lot of secrets in the industry that stay secret for long.

June 24, 2015 | 10:38 AM - Posted by Ryan Shrout

I am quite sure that NVIDIA was doing some research to figure out what AMD might release. In actuality I think that NVIDIA would have aimed to run at just UNDER the performance of the Fury X at the same price if they'd had their choice, so I am guessing NVIDIA overestimated the perf that Fiji brought to the table.

Interesting discussion for the podcast maybe.

June 24, 2015 | 12:04 PM - Posted by Dark_wizzie

Linus totally called this one. He said that Nvidia knew the performance of the Fury X far in advance. The 980 Ti and 980 releases/price drops were not accidents that happened to hurt the Fury X in all the right ways. Although, it's very likely that AMD knew what Nvidia was up to as well. Cards don't just magically fall into those price points; it was pre-planned. So people wondered what Nvidia will do with the price of the 980 Ti now that the Fury is out; the answer, I believe, is nothing: the release of the 980 Ti was already the response.

June 24, 2015 | 01:03 PM - Posted by Martin Trautvetter

I'm sure AMD had a rather unhappy day when they found out about the 980 Ti's pricing. (Much like they were probably quite upbeat after the Titan X.)

Current Fury X pricing seems to indicate that they're trying to ride it out for the moment, but with custom Tis coming to market, I can't imagine it sticking at $650 unless supply is severely constrained.

June 24, 2015 | 03:47 PM - Posted by arbiter

Nvidia has such a large lead in market share that they know they could take a loss on one card. They have a win in just about every other card right now, as most of their main lineup is a new chip, not an old one renamed to look new.

June 24, 2015 | 03:26 PM - Posted by Anonymous (not verified)

I suspected this when the 980 Ti so closely matched the Titan X. I suspect the 980 Ti would have been cut down more significantly without the competition from Fury X. If the 980 Ti had been cut down more significantly, then Fury X would have been competitive with a $1000 Titan X, rather than a $650 980 Ti. Nvidia not only did not cut it down much, they also bumped up the clock compared to the Titan X. If Nvidia can supply the demand for the 980 Ti, then it is acceptable; it seems to be in stock. I have to wonder if Nvidia will release another card though. It seems like they would have GPUs which have defects preventing them from being sold in 980 Ti cards, but would still be a good product.

June 24, 2015 | 09:55 AM - Posted by YTech


The new AMD product seems promising and is still in its early stages. There's room for improvement.

Comparing the specs with its competitor, have you attempted to overclock both cards to see if there are any comparable improvements vs the competitor?

It appears that the memory frequency on the Fury X is pretty low. Not sure if that's a typo. They do have the bandwidth capability, but may be lacking on the frequency, which can be noticed in some games.

June 24, 2015 | 04:12 PM - Posted by BillDStrong

The frequency is correct; they get their bandwidth from the 4096-bit memory bus. Remember the memory and the chip are much closer together, so when you raise the speed of one, it heats up both parts. They also probably don't have the third-party drivers that would allow them to boost the voltage of the cards, which can limit the boost they can get.
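As a sanity check on that point, the bandwidth math works out even at HBM's low clock. A quick sketch (the figures are the published Fury X and reference R9 290X memory specs):

```python
def bandwidth_gbs(effective_mts, bus_width_bits):
    """Peak bandwidth in GB/s = effective transfer rate (MT/s) * bus width / 8 bits."""
    return effective_mts * 1e6 * bus_width_bits / 8 / 1e9

# HBM on Fury X: 500 MHz clock, double data rate -> 1000 MT/s, 4096-bit bus
hbm = bandwidth_gbs(1000, 4096)    # 512.0 GB/s

# GDDR5 on R9 290X: 1250 MHz clock, quad-pumped -> 5000 MT/s, 512-bit bus
gddr5 = bandwidth_gbs(5000, 512)   # 320.0 GB/s

print(hbm, gddr5)
```

So a 500 MHz HBM clock still delivers 60% more peak bandwidth than the 290X's much faster GDDR5, purely because the bus is eight times wider.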

June 24, 2015 | 10:48 PM - Posted by YTech

As I said, still an early product. Maybe there will be third-party drivers that can provide some improvements, as you mention. But I agree about the additional challenges due to the short distance between the units.

Ryan went into more detail during the podcast in regards to the overclocking.

June 24, 2015 | 09:59 AM - Posted by Anonymous (not verified)

I am surprised you didn't use Witcher 3 as a test game. It definitely is a good benchmark for newer cards.

June 24, 2015 | 10:02 AM - Posted by obababoy

Witcher 3 doesn't have an actual benchmark for it though.

June 24, 2015 | 10:13 AM - Posted by Dark_wizzie

What about benchmarking it by playing a section of the game? I thought that was the default way to benchmark.

June 24, 2015 | 10:22 AM - Posted by obababoy

But it isn't exact or repeatable, especially with wandering AI and time of day, etc. I get what you are saying, but it wouldn't be accurate unless you did the same motion like 50 times for each GPU in roughly the same area and compared averages. In Witcher 3, just looking in a different direction sometimes changes my FPS by 10!

June 24, 2015 | 03:49 PM - Posted by arbiter

It's how they do it in Crysis 3: there is a set save they load and run through to a certain spot. There is going to be a difference each time, yes, but that is why you run the benchmark a few times and take an average. It's about the only way to do it.
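The repeated-run-through method described above is easy to sketch. The FPS numbers below are made-up illustrative runs, not measurements; the point is that you also want to look at the spread, not just the mean:

```python
import statistics

# Average FPS recorded from each manual run-through of the same saved section.
runs_fps = [57.2, 59.1, 58.4, 56.8, 58.9]

mean = statistics.mean(runs_fps)
spread = statistics.stdev(runs_fps)

# If the run-to-run spread is large relative to the GPU-to-GPU difference you
# are trying to detect, you need more runs before the comparison means anything.
print(f"mean {mean:.2f} fps, stdev {spread:.2f} fps")
```

With a spread of about 1 fps between runs, a 2% difference between two cards at ~58 fps is right at the edge of what five runs can resolve, which is why scripted benchmarks are preferred when they exist.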

June 24, 2015 | 10:24 AM - Posted by Ciddo (not verified)

It's a little disappointing seeing the Fury X lose marginally to the 980 Ti at the same price point. However, I think the real card we have to wait for is the Fury. If the Fury card that comes out later this year performs within 10-15% at $100 off this price point, that will probably be the card to get.

When Nvidia released the 980 and the 970, the 970 was an amazing buy up until the whole 3.5GB memory issue came to light. If AMD can avoid that with the release of the air cooled Fury card, consumers can probably take better advantage of the HBM with their own water cooling solutions.

June 24, 2015 | 10:26 AM - Posted by Dark_wizzie

There's an interesting difference between the Fraps FPS and Observed FPS for 295x2 on Skyrim @ 4k. o.o

June 24, 2015 | 10:38 AM - Posted by Ryan Shrout

Yeah, AMD never fixed DX9 frame pacing...

June 24, 2015 | 11:17 AM - Posted by Dark_wizzie

Yeah... That ruled out Crossfire for me entirely. Not working with Skyrim is unacceptable. I'm also worried about the 4GB VRAM for Skyrim as well, and it seems to be chugging a bit in GTA V at 4K. If there is a voltage unlock coming, I hope we get it soon, because as it stands the 980 Ti will overclock better.

Some guys on OCN are pointing out some hot VRM temperatures on the back of the card. I dunno if it's a problem or not. (The backplate remains cool, but under it the VRMs are supposed to be really hot. The plate isn't touching the VRMs, and the air in there acts as insulation more than anything, or so it is claimed.)

June 24, 2015 | 01:07 PM - Posted by Anonymous (not verified)

They fixed the Crossfire frame pacing on DX9 for lower resolutions, just not for 4K/Eyefinity, I think.

If you compare your tests from before they fixed anything (when FCAT was new) to now, the DX9 1080p CF results were fixed, I think.

June 24, 2015 | 10:33 AM - Posted by Searching4Sasquatch (not verified)

Free Canadian bacon with every Fury X!

June 24, 2015 | 11:11 AM - Posted by PCPerFan (not verified)

I'm super impressed with the improvements in power consumption, but that's the only thing I'm impressed with.

Performance trails the 980 Ti - I don't know what AMD was thinking pricing this the same as the 980 Ti as the underdog. No DVI, no HDMI 2.0, do not want.

If this was priced at $550, it would be a solid release.

June 24, 2015 | 11:16 AM - Posted by Dark_wizzie

Hopefully the Fury without the water cooling will have the same chip and improved driver optimization when it launches, to make it a more compelling option. Personally, I don't care about the HDMI support, but I'm running a Korean monitor and I need my DVI-D. This and some other things are pushing me towards the 980 Ti, TBH.

June 24, 2015 | 03:51 PM - Posted by arbiter

Power draw is improved, but a lot of that could be due to the water cooler. You can see in the 295X2 how keeping the GPU at a very cool temperature lowers power draw because of less leakage. So I would guess some of the draw savings are due to that. We will know for sure when the non-water-cooled one comes out.

June 24, 2015 | 11:15 AM - Posted by Anonymous (not verified)

Fury X got REKT!

June 24, 2015 | 11:29 AM - Posted by Anonymous (not verified)

Not quite rekt; it's no Bulldozer. More like the old comparison between the 290X and 780 Ti (which, interestingly enough, has shifted heavily in the 290X's favor with current drivers and games - I wonder if the same will happen with the Fury and 980 Ti).

They definitely hyped it too much, but it's really not a bad card by any means. The biggest surprise is how much higher the Fury is in FLOPS than the 980 Ti, yet it delivers similar or slightly lower performance. Drivers? Memory limitations? Tessellation?

June 24, 2015 | 11:57 AM - Posted by Anonymous (not verified)

And Nvidia is increasingly segmenting their gaming SKUs from their accelerator SKUs; at least with AMD, some number-crunching advantages can be had at around the same price point. I see where Nvidia is getting the power savings from: by stripping out the FLOPS/FP capabilities! So AMD's product provides more computational performance, should the newer gaming engines need it for physics and other enhancements. AMD's continued internal improvements to Mantle, quickly provided downstream to Khronos and Vulkan (the public-facing version of most of Mantle's and others' API contributions), will allow Fiji the same improvements over time.

Really, the jury is still out on the Fury X and its derivatives until more complete testing on the newer graphics APIs. And what about AMD's continued internal Mantle developments that will make their way into the software stacks of gaming engines and games, through sharing with M$ for DX12, Khronos for Vulkan, and any special development sharing of Mantle with specific game makers for their products? The gaming comparisons alone are not enough at this early stage to totally dismiss AMD's competing products; this is just the first match in a series, and hopefully the prices will get better on both sides, so the consumer wins.

June 24, 2015 | 03:57 PM - Posted by arbiter

If you look at the 680/780 Ti/980 vs the competing AMD part, the GFLOPS have generally been in AMD's favor by a bit most of the time. But in a lot of games that doesn't matter a bit.

June 24, 2015 | 04:42 PM - Posted by Anonymous (not verified)

A bit, sure, but I don't think the gap was ever this big. Makes me more hopeful about future driver improvements since there's so much raw power there.

June 24, 2015 | 05:16 PM - Posted by Anonymous (not verified)

Games not as much, but for other uses, and some non-gaming graphics usage, that extra processing power comes in handy. With Blender 3D getting support for Cycles rendering on AMD GPUs, things are about to change for low-cost 3D graphics projects, especially given the costs of the professional GPU cards that most independent users cannot readily afford. I'm looking at the future dual-Fury-based SKUs, as well as the pricing that may happen around a professional/HPC APU workstation variant that AMD has in the works, based on Zen CPU cores and Greenland graphics sharing HBM on an interposer, for an APU-type workstation system on an interposer.

Nvidia's pricing is way beyond what Abdul-Jabbar could reach with a rocket-assisted skyhook, for users that need all those GFLOPS without the drained bank accounts. It was Fury that brought on that lower pricing, and Fury does not have bitcoin mining to keep the costs high; in fact I see a relatively quick price drop on the Fury SKU, more so than with the previous generation.

A price war looks like where things are heading, more so from AMD's side, which needs a larger market share and the extra revenues that come with it, at the expense of large profits. Large sales volumes and more market share can make up for lesser profit margins and produce a better economy of scale for AMD with its suppliers; those larger unit volumes can get bigger bulk-materials savings from AMD's suppliers, and that will eventually bring the cost of HBM in line with, or even below, GDDR5.

June 24, 2015 | 11:22 AM - Posted by aurhinius (not verified)

Seems like Bulldozer to Excavator again: HBM slapped on a larger but failing architecture-based core. Doesn't matter how fast the memory is if you can't get the core right. Might explain the use of the interposer. I don't think this is the card AMD intended to release, but perhaps the 20nm failure forced the issue and placing HBM on an old core was the only choice.

The similarity to the CPU situation is uncanny, I feel.

June 24, 2015 | 11:29 AM - Posted by Master Chen (not verified)

So much paid-shill liar BS in this "review" article that it's outright laughable. More than 80% of the most respected and most accurate hardware-reviewing sources out there made clear reports that the stock Fury X beats the living SHIT out of the 980 Ti in 8 cases out of 10 (and even manages to beat Titanic X in 6 gaming tests out of 10 while not even being overclocked), losing noticeably to the 980 Ti and Titanic X only in heavily Nvidia-biased and GayWorks-gimped titles.

PcPer is pretty much done for, at least for me personally. I prefer my sources to be accurate, unbiased and truthful to the very end. This here "article" has clearly shown me that PcPer is a no-good source for hardware testing reviews AT THE VERY LEAST.
