Fable Legends Benchmark: DX12 Performance Testing Continues

Author: Ryan Shrout
Manufacturer: Lionhead Studios

Results - 4K, Ultra

On to bigger and better things - 4K testing!

Frame rates and scores drop quite a bit as we move from 1080p to 4K resolution testing, though the order of GPU performance remains the same. The GTX 980 Ti holds a 9.4% lead over the Fury X, while the R9 390X is 7.4% faster than the GTX 980; the two vendors are essentially trading blows.

The GTX 960 falls behind AMD's Radeon R9 380, while the R7 370 and the GTX 950 are evenly matched. But I'm thinking you won't want to play Fable Legends at 11.1 FPS...

There is some definite movement here as we measure the 95th percentile frame times at 4K. First, note that the advantage of the GTX 980 Ti over the Fury X is nearly gone, dropping to around 4%; this tells us that there are more slower-than-average frames on the GTX 980 Ti than there are on the Fury X, and that is clearly shown in the frame time graphs (static or interactive) below.
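For readers curious how a 95th percentile figure relates to the average, here is a quick sketch in C++ (with made-up frame times, not our actual capture tooling): sort the captured frame times and read off the value that only the slowest 5% of frames exceed.

    // Minimal sketch: derive an average and a 95th percentile frame time
    // from a list of captured frame times. The numbers are invented
    // purely for illustration.
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Frame time (ms) that only the slowest (100 - pct)% of frames exceed.
    double percentile_ms(std::vector<double> ft, double pct) {
        std::sort(ft.begin(), ft.end());
        std::size_t idx = static_cast<std::size_t>(pct / 100.0 * (ft.size() - 1));
        return ft[idx];
    }

    int main() {
        // Mostly ~16 ms frames with a couple of slow spikes.
        std::vector<double> ft = {15.8, 16.1, 16.3, 15.9, 16.2, 24.7,
                                  16.0, 16.4, 31.2, 16.1, 15.7, 16.3};
        double avg = 0.0;
        for (double f : ft) avg += f;
        avg /= ft.size();

        double p95 = percentile_ms(ft, 95.0);
        std::printf("average: %.1f ms (%.1f FPS)\n", avg, 1000.0 / avg);
        std::printf("95th percentile: %.1f ms (%.1f FPS)\n", p95, 1000.0 / p95);
        return 0;
    }

A card whose 95th percentile frame time sits well above its average is spending a noticeable share of its time on slow frames, which is exactly the gap that narrows for the GTX 980 Ti here.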

Maxwell's advantage over Fiji and Hawaii continues in the Global Illumination results for the GTX 980 Ti / GTX 980, though standard lighting and transparency calculations run faster on the Radeon cards.

As we showed you on the 1080p results page, we have something different for you here. If you want the standard static frame time graphs, simply scroll down to view the AMD vs NVIDIA comparisons as you are used to seeing them. However, if you want to try out something new, click here:

Tips:

  • Click names of GPUs in top legend to add/remove them from the graph
  • Highlight a portion (or pinch/zoom) of the graph to zoom in
  • Hold shift and click/drag (or touch drag) to pan in zoomed view

[Frame time graphs: GTX 980 Ti vs R9 Fury X, GTX 980 vs R9 390X, GTX 960 vs R9 380, GTX 950 vs R7 370]

Right away in that first frame time graph you can see the additional variance and larger number of slow frames on the GTX 980 Ti compared to the R9 Fury X. Even though the average frame rate is higher with the GeForce card, it would be fair to say that the experience between the two products is much more evenly matched. Even in the bottom two comparisons (GTX 960 vs R9 380 and GTX 950 vs R7 370), AMD has an advantage when it comes to consistency of frame times, even if it's at frame rates that no one would want to play at.


September 24, 2015 | 09:05 AM - Posted by willmore (not verified)

DX12 only? Nope.

September 24, 2015 | 09:10 AM - Posted by willmore (not verified)

That said, it was a good article, thanks for that! I really liked your comments in the "Looking Forward" section. This game not existing in a DX11/DX12 split does lessen its usefulness as a means to show the benefits of DX12.

September 24, 2015 | 10:57 AM - Posted by Anonymous (not verified)

The game engine is UE4, so it does support DX11/DX12.

How many games used UE3 last generation? You can fully expect this to be more representative of future games.

September 24, 2015 | 11:23 AM - Posted by NetTom (not verified)

Not anymore... UE4 will not be that successful on PC. It is a mobile-focused engine now.

September 24, 2015 | 11:25 AM - Posted by Anonymous (not verified)

Nice try with the fud.

September 24, 2015 | 11:43 AM - Posted by Anonymous (not verified)

hahaha soo much hate and fud for UE4.
Get with the program because there are already a massive amount of games being developed on UE4 with DX12 in mind!

September 24, 2015 | 04:30 PM - Posted by NetTom (not verified)

OK. Waiting for your list of high-profile games using it... :)

September 24, 2015 | 05:16 PM - Posted by Anonymous (not verified)

here you go uninformed pleb
https://www.youtube.com/watch?v=Ymbb8EPDLRo

September 24, 2015 | 05:26 PM - Posted by NetTom (not verified)

Yet. Almost none of them is a high-profile game; most are indie games and not made with DX12 in mind. Well, UE4 itself is not made with DX12 in mind... if you doubt it, read about it. MS is writing its own DX12 code!! :P

But thanks for the confirmation that nothing interesting here to expect. :)

On the other hand, I can tell you that the major, real! DX12 titles will use their own engines... and there is a reason for this. ;)

September 25, 2015 | 05:45 AM - Posted by Anonymous (not verified)

Have fun NOT playing any games then. Me and the majority of gamers will enjoy all these titles.

September 25, 2015 | 05:46 AM - Posted by Anonymous (not verified)

If you haven't seen any high-profile games in that video you are fucking blind. Get lost troll, jeez...

September 25, 2015 | 05:54 AM - Posted by Anonymous (not verified)

https://wiki.unrealengine.com/Category:Games

September 26, 2015 | 12:47 AM - Posted by Anonymous (not verified)

That's a better list. To be fair, some of the stuff in that Unreal sizzle video was UE3. Arkham Knight and Killing Floor 2 were in that video and used UE3 that the respective studios added a lot to. There may have been more, but those two are 100% not UE4.

September 24, 2015 | 01:47 PM - Posted by Anonymous (not verified)

That's what people said about DX11.

September 30, 2015 | 04:18 PM - Posted by praack

True enough, but the big issue is when a game states it cannot run, and, like Fable, is coming from the Microsoft camp,

and then with a software tweak can run on earlier cards.

If there were something critical with DX12 to keep the package from running, that would be something, but in reality it is just to force the upgrade package for Windows 10 and the associated vendor graphics card purchase.

September 24, 2015 | 09:14 AM - Posted by Heavy (not verified)

Time to see the flame war. This is going to be fun. Now go, guys, feel the hate within you.

September 24, 2015 | 10:16 AM - Posted by obababoy

Seems like you are the only one so far that wants to flame... Go away or bring something useful to the comments.

September 24, 2015 | 12:11 PM - Posted by Heavy (not verified)

yes my child feel the hate flow through you

September 24, 2015 | 09:34 AM - Posted by killurconsole

Good article, sexy graph.

Did the game ever use more than 4GB (VRAM)?

September 24, 2015 | 09:58 AM - Posted by ku4eto (not verified)

Yea, GPU + CPU load graphs would have been good.

September 24, 2015 | 03:19 PM - Posted by Darthbreather (not verified)

1440p results would have been nice as well, especially for the high-end cards. Was surprised to see no R7 360, given it is GCN 1.1.

September 24, 2015 | 10:13 AM - Posted by obababoy

Ryan,

Curious how your benchmarks show Nvidia with a lead over AMD in the 980 Ti/Fury X comparison, but on https://www.extremetech.com/gaming/214834-fable-legends-amd-and-nvidia-g... they show no difference. Not arguing bias, just curious, because the only difference between systems would be the CPU. You guys use the 6700 and they used some X99 CPU.

How many runs did you guys do? They noticed inconsistencies between runs, so they averaged a bunch.

September 24, 2015 | 10:29 AM - Posted by Daniel Meier (not verified)

Noticed an AMD lead on OC3D as well, so it seems the numbers are a bit mixed.

September 24, 2015 | 10:31 AM - Posted by Daniel Meier (not verified)

Scrap that, OC3D got their numbers from an ExtremeTech article. But it still shows AMD ahead, not by much, but ahead.

September 24, 2015 | 10:34 AM - Posted by Ryan Shrout

Each test was run AT LEAST four times and results were not quite averaged, but we were looking for variance as we were warned it might be there. I will say that the variance I saw was very low - in the 100-150 points range at most - and people using CPUs with higher core counts than 4 seemed to be more likely to see major swings.

I am very confident in our results here.
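To illustrate what that run-to-run check looks like in practice (a quick sketch with made-up scores, not the exact tooling used for this article), the spread across repeated runs of one configuration can be summarized like this:

    // Sketch: summarize run-to-run spread of an overall benchmark score.
    // The four scores below are hypothetical.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<double> scores = {3510.0, 3575.0, 3542.0, 3620.0};

        double mean = 0.0;
        for (double s : scores) mean += s;
        mean /= scores.size();

        auto mm = std::minmax_element(scores.begin(), scores.end());
        std::printf("mean: %.0f, spread: %.0f points over %zu runs\n",
                    mean, *mm.second - *mm.first, scores.size());
        return 0;
    }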

September 24, 2015 | 10:46 AM - Posted by obababoy

Thanks for the explanation. I just found it interesting that the core count made the difference...The benchmark appears to make my R9 290 a still very relevant mainstream card with great value. Really appreciate your work here!

September 24, 2015 | 12:13 PM - Posted by Anonymous (not verified)

Given that little variance, why does the combined score in the 4K video show 3786? Was something different as it is higher than all the other results by a few hundred points.

September 24, 2015 | 02:25 PM - Posted by ppi (not verified)

Updated AMD drivers?

(Funny how both AMD and nVidia release drivers just for pre-beta benchmarks :-) )

September 24, 2015 | 08:10 PM - Posted by Azix (not verified)

The ExtremeTech results were provided by AMD but, as ET explained, they are valid. They actually had lower results than AMD for the 980. The difference, as it often apparently is, was the clock speeds. Some 980s are faster, some slower. Which is why reviewers should always mention clock speeds, IMO.

September 24, 2015 | 09:32 PM - Posted by Osjur (not verified)

Did you use custom cards in your testing?

September 24, 2015 | 11:07 AM - Posted by Anonymous (not verified)

"Unreal 4 engine — and Nvidia and Epic, Unreal’s developer, have a long history of close collaboration."

From the ExtremeTech article. So there will be lots of BETA version testing that will bring up more questions, but AOS is RTM. It's time to pay more attention to what information a review omits, as that will be very important. The more reviews read, the better the overall picture of actual performance will be, with an "*" on this game until RTM.

*BETA version compared to RTM (AOS).

"The Fable Legends benchmark is both surprisingly robust in the data it provides and also very limited in the configurability the press was given. The test could only be run in one of three different configurations:"

This is not the full benchmark, as the configurability was limited; more questions must be resolved after the game is RTM.

The entire graphics API software stack is turning over with both DX12's and Vulkan's release, and until things stabilize, more games are tested in their RTM versions, and the new graphics APIs have been out/RTM for six months, things are going to be in a state of flux. Most testing on BETA versions should be taken with a large amount of skepticism.

October 14, 2015 | 01:23 AM - Posted by SiliconDoc (not verified)

Curious how you don't mention that at your link the system ran in low CPU mode; they caught the problem, cranked the system to high power and 3.3GHz instead of 1.2GHz, AND NVIDIA GAINED 27% WHILE AMD ONLY GAINED 2% - at the normal maxed CPU speed.

Then the article mentions (you didn't READ IT!) that their Nvidia GPU is clocked lower than one used at tech site X that had higher Nvidia results.
So I guess that little ignorant attack on this website was a complete failure, looking into it.
Thanks for playing and you're welcome.

Now, my position is the 980 and 980ti are excellent overclockers compared to the amd cards. So....AMD had better have at least a 33% better price, some free games, and a darn good warranty, or I don't want it.
NVidia has so many more features, so much more fun packed in, and less power usage.

I feel bad for AMD, they need to be 10%-20% faster given their lack in the software department, and they should be, but they are not.

September 24, 2015 | 10:45 AM - Posted by Anonymous (not verified)

So as expected, the whole Ashes thing was blown out of proportion by AMD, its fanboys, and sites like PCPer that reported it as if DX12 was going to be the end for nVidia.

September 24, 2015 | 10:48 AM - Posted by obababoy

You are the epitome of the awfulness you THINK is all around you...Everyone hates you :)

Ashes is a totally different type of game and of course anyone should take it with a grain of salt because it is new and not out yet, but it could still be a title that shines with AMD products.

September 24, 2015 | 10:50 AM - Posted by Anonymous (not verified)

We already know it will as Oxide is a partner with AMD.

Once again, Ashes will not be representative of the majority of DX12 titles.

October 3, 2015 | 05:23 AM - Posted by Jules Mak (not verified)

Stop your FUD right there!

Oxide Games is NOT partnering with AMD. Oxide Games' publisher Stardock is a partner with AMD.

However! Stardock also has a partnership with NVIDIA for GPU driver updates, a.k.a. Impulse NVIDIA Edition.

September 24, 2015 | 11:11 AM - Posted by Hellowalkman (not verified)

AotS was a very CPU-intense benchmark, so the AMD cards got a huge boost going from DX11 to DX12. This game is more GPU-bound.

September 24, 2015 | 11:59 AM - Posted by ppi (not verified)

Actually, in AotS, nVidia DX11 vs AMD DX12 has shown very similar results to this.

What was surprising was the DX11->DX12 jump for AMD vs. nVidia, but frankly, it is clear AMD did not optimize for AotS DX11 at all.

September 24, 2015 | 03:52 PM - Posted by arbiter

You also forget that it uses async compute, which was a locked AMD-only tech till recently. You can argue and claim it wasn't, but it was. As I have said a dozen times before, it would have been like PhysX being added as a standard in DX12; AMD wouldn't support it right either. Async is an optional, not a required, part.

This game is more neutral in terms of tech used, as it didn't use a tech from one side or the other.

September 24, 2015 | 07:07 PM - Posted by Anonymous (not verified)

Asynchronous compute is not owned by anyone; lots of processors use asynchronous compute in their hardware: Intel, ARM (Mali, ARM-based CPUs), AMD (CPUs and GPUs), IBM (POWER8, and others). It's just that Nvidia has been on a gimping spree with its consumer GPU SKUs! Now it's coming back to bite Nvidia in the A$$, and you are always spouting the same FUD and hoping it will take!

Face it, HSA-style asynchronous compute fully in the GPU's hardware is here to stay, and future GPUs/SoCs from many of the HSA Foundation's members will have more GPU compute going forward! Also expect FPGAs and DSPs and any other processing hardware to be available for hardware asynchronous compute, in spite of Nvidia's attempts at gimping their consumer GPUs' hardware resources for extra profit milking. CPUs are not the only source of computing power, and have you ever tried to game on a CPU without the help of a GPU? AMD's GPUs will continue getting even more hardware asynchronous compute abilities in their Arctic Islands micro-architecture based GPUs.

Expect that AMD's future HPC/workstation SKUs will begin to make inroads into the market with the on-interposer GPU as a computational accelerator wired more directly to the CPU via an interposer, and that those ACE units will be able to run more code on their own without the need of assistance from the CPU.

September 25, 2015 | 02:51 AM - Posted by JohnGR

Locked AMD-only tech? LOL!!! What did I just read? Are you serious? Please say no. :D

September 26, 2015 | 03:20 PM - Posted by Anonymouse (not verified)

Yes he is serious, and always has been. IMO, his posts and avatar explain each other.

September 25, 2015 | 03:23 AM - Posted by Ty (not verified)

Async compute was not locked to AMD; Nvidia just doesn't have a good implementation of it in Maxwell. Read this to get a good summation.

https://forum.beyond3d.com/posts/1872750/

It's probably the case that most of the time Nvidia would be better off not using it on their Maxwell parts, because other methods will work better on their hardware.
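To illustrate what "async compute" means at the API level, here is a minimal, hypothetical D3D12 sketch (not code from Fable Legends or UE4): the application creates a compute queue alongside the usual direct (graphics) queue, and whether work on the two queues actually overlaps on the GPU is up to the hardware and driver.

    // Minimal sketch of the D3D12 setup behind "async compute": a direct
    // (graphics) queue plus a separate compute queue, synchronized by a fence.
    // Error handling is omitted; this is illustrative only.
    #include <windows.h>
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <cstdio>
    #pragma comment(lib, "d3d12.lib")

    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device)))) {
            std::printf("No D3D12-capable device found\n");
            return 1;
        }

        // The usual graphics queue for rendering work.
        D3D12_COMMAND_QUEUE_DESC directDesc = {};
        directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> directQueue;
        device->CreateCommandQueue(&directDesc, IID_PPV_ARGS(&directQueue));

        // A second queue for compute work; the GPU/driver may execute its
        // command lists concurrently with graphics work (or may not, on
        // hardware without a good concurrent implementation).
        D3D12_COMMAND_QUEUE_DESC computeDesc = {};
        computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        ComPtr<ID3D12CommandQueue> computeQueue;
        device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

        // A fence lets the graphics queue wait for compute results only
        // where a dependency exists, instead of serializing all the work.
        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        // computeQueue->Signal(fence.Get(), 1);  // compute marks work complete
        // directQueue->Wait(fence.Get(), 1);     // graphics waits at that point

        std::printf("Created direct and compute queues\n");
        return 0;
    }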

September 25, 2015 | 02:49 PM - Posted by funandjam

"...Async compute which was a locked AMD only tech til recently. "

lol, you just made me blow soda out of my nose I was laughing so hard!

October 3, 2015 | 05:33 AM - Posted by Jules Mak (not verified)

A locked AMD-only tech?

Async compute is a concept adopted by the HSA Foundation. Free to use, with no royalties.

Tsk, tsk... Such a thing is completely alien to you ngreedya patent trolls.

September 24, 2015 | 11:57 AM - Posted by ppi (not verified)

O'RLY? Maybe 980Ti vs. FuryX ... but:

That 390X > 980 is normal DX11 business as usual, right?
That 380 > 960 is normal DX11 business as usual, right?

And that is both in FPS and 95-99 percentile (check AnandTech and TechReport as well).

September 24, 2015 | 08:12 PM - Posted by Azix (not verified)

a 380 is typically faster than a 960 iirc. 390x beats 980 mostly at higher resolutions in some games. Consistently beating it is not normal but this is one benchmark.

September 24, 2015 | 01:12 PM - Posted by Anonymous (not verified)

Blown out of proportion? No. Fact is, any company that is and has always been biased towards Nvidia will always do everything they can to increase performance with their hardware, even if it means, and sometimes forces, a reduction in performance on the competition's hardware/software, i.e. AMD.

There is NOTHING that is fully DX12 compliant; however, AMD is FAR more complete in that regard, with nearly all products released from the Radeon 7k GCN series and up. Nvidia is far more fractured through their entire lineup, supporting DX12 in bits and pieces. Facts are facts.

This is for the same reason that they did not fully support DX10-11: they make loads of $ off a blind fanbase. Why would they bother actually giving the full ability to their product when it would mean a slight boost in cost to give it? Might as well wait till the next generation to force buyers to buy once more for a bit more support.

AMD has a massive advantage at this point with DX12, and using just one game that is funded and tweaked by a company biased towards them is not proof of anything other than foul play.

September 24, 2015 | 03:55 PM - Posted by arbiter

It's funny you say the game is biased; as if AOTS wasn't biased in AMD's favor? Funny how short-term AMD fanboyz' memories are. AMD's drivers for DX11 suck BAD, get over it. It's not because a game company is biased against them, it's AMD's fault. Stop trying to shift blame; you sound like an AMD PR rep.

September 24, 2015 | 07:36 PM - Posted by Anonymous (not verified)

AOTS shows that full hardware asynchronous compute support will benefit gaming by moving more of the work to the GPU and saving a lot of the intrinsic latency that comes from CPUs having to communicate with discrete GPUs over PCIe. The more code that can be run on the GPU the better, because GPUs have vastly more cores than any CPU, and GPUs with ACE/other fully in-hardware asynchronous compute support will not need the CPU's help as much. The more running on the GPU, the less latency there will be in gaming.

I say put all of the game on the GPU and make those ACE units able to run all of the game's code, with the CPU there for running the OS and other chores. The future AMD APUs on an interposer will be able to host a high-end GPU and more directly wire up (thousands of traces) the CPU to the GPU and other processing dies on the interposer, including the other wide traces/channels to the HBM, and even AMD's HBM will be getting some extra FPGA processing going on, with an FPGA added to the HBM stack sandwiched between the bottom HBM memory controller logic chip and the HBM memory dies above.

Stop trying to apologize for Nvidia's lack of foresight and greed, and try to get Nvidia on the path towards full in-hardware asynchronous compute support. The gimping of compute will not pay off in the future for gaming. You sound like Nvidia's damage control! AMD and others will be making more use of GPU/DSP/FPGA/other hardware asynchronous compute, from the mobile market to the HPC/workstation/supercomputer market!

September 25, 2015 | 02:49 AM - Posted by JohnGR

You seem to like to ignore the fact that only GM200 can fight AMD. AMD is winning in all categories against GM204. Fable Legends is the proof everyone wanted after AotS that AMD is back.

September 25, 2015 | 06:24 AM - Posted by Anonymous (not verified)

Enjoy blind fanboy!
AMD desperately needs a hit product line to lift its ailing computing and graphics division, which has been crushed between Intel's x86 chips in PCs and NVIDIA's (NASDAQ:NVDA) add-in graphics boards. AMD's market share in add-in boards fell from 22.5% to 18% between the second quarters of 2014 and 2015, according to JPR. During that period, NVIDIA's share rose from 77.4% to 81.9%.

Last quarter, AMD's computing and graphics unit's revenue declined 54% annually as its operating loss widened from $6 million to $147 million.

September 25, 2015 | 01:29 PM - Posted by JohnGR

NVIDIA's (NASDAQ:NVDA)?

So, you are not just an Nvidia fanboy but also a shareholder.
Thank you for the information.

September 30, 2015 | 06:47 PM - Posted by Benchen (not verified)

I think next year nvidia will have trouble.

September 25, 2015 | 06:25 AM - Posted by Anonymous (not verified)

But based on AMD's weak position in both markets, it's likely that its top- and bottom-line losses in computing and graphics will keep piling up.

September 25, 2015 | 03:07 AM - Posted by Ty (not verified)

I would not celebrate just yet, nvidia got spanked across the gpu line at every price point except their high end card.

a freaking 390 is in the ballpark with the 980, and a 390x spanks it

280 beats the 960 handily, it's a win across the board except for the 370, but who wants that thing?

I wonder if any of the tech press throwing out their disappointment at amd just rebranding cards and relaunching them will either dial it back or publicly eat crow? Turns out those 2014 hawaii cards are besting the newer 970/980 pretty clearly. But at least nvidia has the crown with the 980ti... at least until the real game gets launched and we actually see more spell effects with post processing show up and greater use of async compute help narrow/close/open up gaps in amds favor.

September 25, 2015 | 03:25 AM - Posted by Ty (not verified)

*** 2013 Hawaii cards

October 14, 2015 | 01:29 AM - Posted by SiliconDoc (not verified)

The AMD fanboys fall for it every time.

Soon the new phantom future AMD winning "tech/thing" will be slithered and slathered into the web announcing the day when the raging hate filled red freaks "destroy the competition!"

In the mean time, rinse, repeat, retreat, retort, RETARDED.

September 24, 2015 | 10:51 AM - Posted by adogg23

I like fps to be high sometimes, its the best. Does this game come with all 8 gpu's installed in the box? Call me?

September 24, 2015 | 10:59 AM - Posted by Ryan Shrout

Best comment ever.

September 24, 2015 | 11:22 AM - Posted by adogg23

Yes.

September 24, 2015 | 11:07 AM - Posted by nevzim (not verified)

2560?

September 24, 2015 | 01:22 PM - Posted by remon (not verified)

Seriously, this. All the benchmarks are either 1080 or 4k.

September 24, 2015 | 11:11 AM - Posted by mtcn77

Mr. Shrout, the percentile charts were very graphic in leading the editorial benchmark forward. I ask of you to continue demonstrating them in your future reviews.

September 24, 2015 | 11:50 AM - Posted by Keven Harvey (not verified)

It's weird that the 980 Ti pulled ahead of the Fury X even more at 4K; we usually see the opposite. How much VRAM was the benchmark using? More VRAM usage might also explain why the 390X got ahead of the 980 by that much.

September 24, 2015 | 12:17 PM - Posted by Anonymous (not verified)

Question: According to the Intro page, the benchmark supports DX12's ASync Compute functionality - The same functionality which sparked the Ashes of Singularity controversy, yes?

What we learned from that controversy is that AMD's drivers already have ASync support, while Nvidia's Maxwell DOES also support ASync(we don't know just how well in practice yet, but still), but the drivers are still on their way.

Does that mean we are comparing AMD's ASync enhanced performance Vs. Nvidia's raw, serial performance? Would that not mean a further boost is in the pipe for Maxwell cards?

September 24, 2015 | 12:46 PM - Posted by Hellowalkman (not verified)

The game isn't using async compute on PC... it is enabled only on Xbox.

September 24, 2015 | 12:54 PM - Posted by Anonymous (not verified)

This is sad; with no async shader support, PC performance is artificially held back AGAIN to make Nvidia look better, even with DX12!

If Microsoft themselves ENCOURAGE this type of practice even on published features, then... well... forget it AMD, you can't fight Microsoft.

September 24, 2015 | 01:31 PM - Posted by Anonymous (not verified)

MSFT cannot force developers to use X features in DX12; rather, for them to have allowed Nvidia to claim "full support", that is a blatant lie, period. Async etc. are features; MSFT cannot force folks to use them unless it was placed as a requirement via hardware to do DX12 at all, which it is not.

Hell, for many years since the Radeon 2k series they had tessellators built in, but it was not till DX11 that MSFT made it mandatory via hardware with specific attributes, which favored Nvidia and allowed Intel to emulate it. Basically anything prior to the Radeon 5k series was unable to do it without having an inherent advantage vs the competition.

I think this is why they made mandatory changes etc. with some of these features for DX11-12: to not artificially limit performance or force those in a position where they would not get good results. An example would be async, which is more or less superb on AMD via hardware, where others to my knowledge have to do software workarounds, never a good thing. In this regard they should have put a context switch on things to lock out, oh I don't know, GameWorks optimizations, PhysX etc., as they artificially tilt the table to make one product look great while shafting everyone else performance/quality wise.

Anyways, DX12 is a great step forward provided MSFT does step in and make sure some underhanded tactics are not made to supersede the benefits of all that extra hardware/software code, but then again, Win10 stands in the way of nearly everything with the way it was built/designed, which will destroy consumer/business operations everywhere if allowed to remain as it is.

September 24, 2015 | 06:49 PM - Posted by ppi (not verified)

While I am in no position to say whether asynch shaders are in or not, the fact that 390X is ahead of 980 by about the same margin as 980 tends to be ahead of 390X in DX11 games (same for 380 vs 960) ...

... it is clear that those old AMD Rebrandeons love DX12, even with UE4. Therefore, I would put those conspiracy theories aside for now.

Surprisingly enough, Radeons have better 95-99 percentiles compared to FPS (vs. nVidia counterparts), while in DX11 it tends to be the other way around.

September 24, 2015 | 01:21 PM - Posted by Anonymous (not verified)

Nvidia can support it via software if they choose, not hardware, as it is more or less HSA/Mantle-derived code, just like PhysX is Nvidia-specific and locked code. Nvidia may give great performance going from, say, the 400-500-600-700-900 series, HOWEVER they also chopped many things out and did not bother putting certain things in; this is why the power requirements have dropped so much even though performance for many things stayed ahead.

You cannot chop so many things out and expect awesome performance in every regard. This is part of the reason why AMD cards "appear" to use more power: they are more "complete", if you will, adding more on the die/card, which takes more power but also gives the option to use X without having to do pseudo workarounds.

Nvidia is the master of proprietary crap, ripping and chopping, not adding, screwing with and reducing quality etc., just to serve their shareholders, NOT consumers at all.

Point is, in any game/app that uses DX12/Vulkan and is not biased towards any one company, at this point Radeons, especially GCN 1.2 based, will absolutely demolish GeForce cards. Yes, they use more power to do it, but more performance has a cost; for Radeon this cost gives better build quality and higher absolute power use.

Muscle cars always used more fuel than lightweight exotics :)

September 24, 2015 | 01:52 PM - Posted by Anonymous (not verified)

Nvidia is the Green Goblin Gimper that removes more compute from their consumer SKUs! Nvidia has steadily removed compute from their consumer SKUs and can only implement Asynchronous compute in software, but not truly in hardware. Nvidia even spun this as power savings to feed this line to the pea brained Gits that overspend on Nvidia's pricy kit. Enough of the Gimping on consumer GPU SKUs, Green Goblin, the new graphics APIs are able to make use of the hardware based Asynchronous compute in GPUs now, so that Green Goblin Gimping for more greenbacks at the expense of Hardware Asynchronous Compute will no longer be profitable.

Most new software including open source software will make use of hardware asynchronous compute, and people will want to make use of the GPU for more uses other than gaming, and the GPU will be used for more all-around software acceleration going forward. Non-gaming graphics (and now gaming for games too) software makes heavy use of hardware asynchronous compute to render workloads in minutes that would take the CPU hours to complete. So no more of the gimping of compute, as it will no longer be an advantage to the Green Goblin and its minions of green slimes!

October 14, 2015 | 01:39 AM - Posted by SiliconDoc (not verified)

disappointed with amd's crap performance again, means the amd fan must attack Nvidia in a vitriolic paranoid rage, you performed well, feel the hatred surging through you, now raise the red dragon card and strike again at the enemy, and your journey to the bankrupt failing dark side will be complete !

October 14, 2015 | 01:36 AM - Posted by SiliconDoc (not verified)

wow all that conspiracy fanboy rage fud blown about still...

amd is a crumbling broken empty shell...yet the raging hate filled fanboys survive

hope they gave you some free amd crap

September 24, 2015 | 12:25 PM - Posted by Anon (not verified)

Ryan, did you use the EVGA 980 Ti or the reference 980 Ti?

September 24, 2015 | 01:05 PM - Posted by zMeul (not verified)

I'm quite interested in VRAM usage
why is this not tested!?!?!??!?!

THIS SHOULD BE TESTED!

September 24, 2015 | 01:06 PM - Posted by Anonymous (not verified)

Why didn't you test the most popular upper-mid GPUs, the R9 390 and GTX 970?

September 24, 2015 | 02:12 PM - Posted by Gumballdrop (not verified)

Wow AMD killed it in the benchmarks even though they have the older tech

September 24, 2015 | 04:37 PM - Posted by Anonymous (not verified)

This doesn't really look that dissimilar compared to AotS, but we can't really tell without testing the DX11 code path. We may find that AMD suffers very low frame rates on DX11, but DX11 isn't going to be very relevant in this case. The low end DX12 cards in this review are at low enough frame rate that it would be unplayable at those settings. Are there going to be any DX11 limited cards which will be able to play this at a reasonable frame rate without turning the settings down to low quality?

It is interesting to see the 380 doing so well since it is based on such an old design. The 390x looks like it may be the best deal currently. Wonder if that is partially due to 8 GB of memory.

September 24, 2015 | 04:56 PM - Posted by Anonymous (not verified)

UPDATE: It turns out that the game will have a fall-back DX11 mode that will be enabled if the game detects a GPU incapable of running DX12.

So all cards were verified to be running DX12 ?

September 24, 2015 | 05:44 PM - Posted by Anonymous (not verified)

Is it just me, or do AMD cards overall look less jittery than Nvidia? Maybe checking the 99th percentile could show this too, or some other metric, like a moving standard deviation or something.
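For what it's worth, a "moving standard deviation" of frame times is easy to compute from the same data; here is a rough sketch (with invented frame times, not a metric published in this article) of how that jitter measure could work:

    // Sketch: standard deviation of frame times over a sliding window,
    // as a rough measure of short-term jitter. Frame times are invented.
    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    std::vector<double> moving_stddev(const std::vector<double>& ft, std::size_t win) {
        std::vector<double> out;
        if (ft.size() < win) return out;
        for (std::size_t i = 0; i + win <= ft.size(); ++i) {
            double mean = 0.0;
            for (std::size_t j = i; j < i + win; ++j) mean += ft[j];
            mean /= win;
            double var = 0.0;
            for (std::size_t j = i; j < i + win; ++j) var += (ft[j] - mean) * (ft[j] - mean);
            out.push_back(std::sqrt(var / win));
        }
        return out;
    }

    int main() {
        // Steady frames followed by a jittery stretch.
        std::vector<double> ft = {16.1, 16.2, 16.0, 16.3, 16.1,
                                  14.0, 25.0, 13.5, 27.0, 15.0};
        for (double s : moving_stddev(ft, 5))
            std::printf("%.2f ", s);
        std::printf("(ms)\n");
        return 0;
    }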

September 24, 2015 | 07:57 PM - Posted by Anonymous (not verified)

Not just you.

These graphs demonstrate AMD cards give a smoother running game experience.

September 24, 2015 | 06:16 PM - Posted by mLocke

So when's Black & White 3 coming out?

September 24, 2015 | 07:57 PM - Posted by semiconductorslave

I just bought an R9 290X with 8GB RAM for $300, and to see that I will be able to run above a $450-$500 GTX 980 is OK with me.

September 24, 2015 | 09:35 PM - Posted by pat182 (not verified)

fill your 290x to 8gb and tell me your fps lol

September 24, 2015 | 09:57 PM - Posted by semiconductorslave

@pat182

?

September 25, 2015 | 05:58 AM - Posted by Anonymous (not verified)

lmfao

October 14, 2015 | 01:43 AM - Posted by SiliconDoc (not verified)

the fantasy the amd fans entertain is endless, it's just so surprising

September 30, 2015 | 06:54 PM - Posted by Benchen (not verified)

Good buy

September 25, 2015 | 12:54 AM - Posted by General Lee (not verified)

I see this game being more of a PR thing for Microsoft and DX12, with just a few token attempts at using DX12 features that are largely inconsequential. In essence it's just another UE4 game, and if Lionhead has done any significant optimization effort, it's gone to the Xbox One version. The lack of a DX11 option should be enough to tell that there are probably no major benefits, since the PC port is not bottlenecked by draw calls or utilizing things like asynchronous shaders. According to UE4 documentation, AS support was added by Lionhead Studios to the Xbox version, with no support for PC.

Still, it's a pretty-looking game and probably representative of a lot of UE4 titles in the near future. A lot of games like these weren't really being bottlenecked by DX11 in the first place. I'm guessing most of the early DX12 titles will have these sorts of minor tweaks and the devs are just testing things out, and it'll take a year or two before we start seeing engines make real use out of DX12, especially on PC.

September 25, 2015 | 03:02 AM - Posted by semiconductorslave

Luddites!

September 25, 2015 | 05:59 AM - Posted by Anonymous (not verified)

Yep 100% agree

September 25, 2015 | 07:55 AM - Posted by Anonymous (not verified)

Async Compute is NOT utilized in the current UE4 engine, except for the Xbox One.

Ref: https://docs.unrealengine.com/latest/INT/Programming/Rendering/ShaderDev...

September 25, 2015 | 08:33 PM - Posted by StephanS

This makes the R9 290X look like a monster value if more DX12 titles turn out this way.
Less than half the price of a GTX 980, and faster at 1080p and 4K
(and better frame times).

Other sites did test the GTX 970, and it's not looking good, kids...
Many people really overpaid for much less capable HW than Hawaii.
(But at least it rocks in DX11 titles.)

September 26, 2015 | 04:56 PM - Posted by RooseBolton

Ryan, I suppose the benchmark does not support SLI or crossfire or you would have tested them?

September 27, 2015 | 10:55 AM - Posted by Anonymous (not verified)

LOVE the interactive graphs

September 27, 2015 | 10:11 PM - Posted by PCz

This is definitely a better benchmark to judge future performance of DX12. Fable Legends uses Unreal Engine 4. Hundreds of last gen games were made with Unreal Engine 3, from Bioshock Infinite, Gears of War, Hawken, Borderlands, XCOM, etc. And probably hundreds more games will be made with Unreal Engine 4, which has DX12 support built in. The Ashes of the Singularity game engine isn't likely to be widely used beyond a few games, so this seems to be a much better indicator of future DX12 performance in most games. Also Unreal Engine 4 is very indie friendly, so this should apply to more than just AAA games.

September 28, 2015 | 08:11 AM - Posted by Anonymous (not verified)

UE4 is FREE!

September 28, 2015 | 09:47 PM - Posted by PCz

Free!*

*You pay a 5 percent royalty on gross revenue after the first $3,000 per product, per quarter.
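
As a rough worked example (assuming the royalty applies only to revenue above the threshold): a product grossing $10,000 in a quarter would owe 5% of the $7,000 over the first $3,000, or $350.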

September 28, 2015 | 05:29 PM - Posted by Ramon (not verified)

"It turns out that the game will have a fall-back DX11 mode that will be enabled if the game detects a GPU incapable of running DX12."

Give me DX11 VS DX12, now!!!

October 6, 2015 | 05:52 AM - Posted by Anonymous (not verified)

Any update on whether the Catalyst 15.9.1 driver actually improves the performance of the AMD cards?
