Updated: Rise of the Tomb Raider: AMD and NVIDIA Performance Results

Early testing for higher-end GPUs

UPDATE 2/5/16: Nixxes released a new version of Rise of the Tomb Raider today with some significant changes. I have added another page at the end of this story that looks at results with the new version of the game and a new AMD driver, and I've also included some SLI and CrossFire results.

I will fully admit to being jaded by the industry on many occasions. I love my PC games and I love hardware, but it takes a lot for me to get genuinely excited about anything. After hearing game reviewers talk up the newest installment of the Tomb Raider franchise, Rise of the Tomb Raider, since its release on the Xbox One last year, I've been waiting for its PC release to give it a shot with real hardware. As you'll see in the screenshots and video in this story, the game doesn't appear to disappoint.

Rise of the Tomb Raider takes the exploration and "tomb raiding" aspects that made the original games in the series successful and applies them to the visual quality and character design brought in with the reboot of the series a couple of years back. The result is a PC game that looks stunning at any resolution, even more so in 4K, and one that pushes your hardware to its limits. For single GPU performance, even the GTX 980 Ti and Fury X struggle to keep their heads above water.

In this short article we'll look at the performance of Rise of the Tomb Raider with a handful of GPUs, leaning towards the high end of the product stack, and offer up my view on whether each hardware vendor is living up to expectations.

Continue reading our look at GPU performance in Rise of the Tomb Raider!!

Image Quality Settings Discussion

First, let's talk a bit about visuals, image quality settings and the dreaded topic of NVIDIA GameWorks. Unlike the 2013 Tomb Raider title, Rise of the Tomb Raider is part of the NVIDIA "The Way It's Meant To Be Played" program and implements GameWorks to some capacity.

As far as I can tell from published blog posts by NVIDIA, the only feature that RoTR implements from the GameWorks library is HBAO+. Here is how NVIDIA describes the feature:

NVIDIA HBAO+ adds realistic Ambient Occlusion shadowing around objects and surfaces, with higher visual fidelity compared to previous real-time AO techniques. HBAO+ adds to the shadows, which adds definition to items in a scene, dramatically enhancing the image quality. HBAO+ is a super-efficient method of modeling occlusion shadows, and the performance hit is negligible when compared to other Ambient Occlusion implementations.

The in-game settings allow for options of Off, On, and HBAO+ on all hardware. To be quite frank, any kind of ambient occlusion is hard to detect in a game while in motion, though the differences in still images are more noticeable. RoTR is perhaps the BEST implementation of AO that I have seen in a shipping game; thanks to the large, open, variably lit environments it takes place in, it seems to be a poster child for the lighting technology.

That being said, in our testing for this story I set Ambient Occlusion to "On" rather than HBAO+. Why? Mainly to help dispel the idea that the performance of AMD GPUs is being hindered by the NVIDIA GameWorks software platform. I'm sure this won't silence all of the conspiracy theorists, but hopefully it will help.

Other than that, we went with the Very High quality preset, which turns out to be very strenuous on graphics hardware. If you don't have a GTX 980 or R9 390 GPU (or better), chances are good you'll have to step down from that preset, even at 2560x1440 or 1920x1080, to get playable and consistent frame times. Our graphs on the following pages will demonstrate that point.
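
For readers who want to run the same kind of check on their own captures, here's a minimal sketch in Python (the file name and the one-value-per-line format are assumptions for the example; tools like FRAPS or PresentMon can produce per-frame time logs):

    # Illustrative only: summarize a per-frame render time log in milliseconds.
    import statistics

    def summarize(frametimes_ms):
        avg_fps = 1000.0 / statistics.mean(frametimes_ms)
        ordered = sorted(frametimes_ms)
        p99 = ordered[int(0.99 * (len(ordered) - 1))]  # 99th percentile frame time
        return avg_fps, p99

    frametimes = [float(line) for line in open("frametimes.csv") if line.strip()]
    fps, p99 = summarize(frametimes)
    print(f"Average: {fps:.1f} FPS, 99th percentile frame time: {p99:.1f} ms")

The 99th percentile frame time is what exposes the stutter that a plain average hides.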

Testing Setup

For this short sample of performance we are comparing six different graphics cards with matching price points from AMD and NVIDIA.

  • $650
    • NVIDIA GeForce GTX 980 Ti 6GB
    • AMD Radeon R9 Fury X 4GB
  • $500
    • NVIDIA GeForce GTX 980 4GB
    • AMD Radeon R9 Nano 4GB
  • $350
    • NVIDIA GeForce GTX 970 4GB
    • AMD Radeon R9 390 8GB

I tested in an early part of the Syria campaign at both 2560x1440 and 3840x2160 resolutions, both of which were hard on even the most expensive cards in the comparison. Will the 6GB vs 4GB frame buffer gap help the GTX 980 Ti in any particular areas? How will the R9 390 with 8GB of memory compare to the GTX 970, whose 4GB memory configuration has long been under attack?

This also marks the first use of our updated GPU testbed hardware, seen in the photo above.

PC Perspective GPU Testbed
Processor: Intel Core i7-5960X Haswell-E
Motherboard: ASUS Rampage V Extreme X99
Memory: G.Skill Ripjaws 16GB DDR4-3200
Storage: OCZ Agility 4 256GB (OS), ADATA SP610 500GB (games)
Power Supply: Corsair AX1500i 1500 watt
OS: Windows 10 x64

January 29, 2016 | 04:55 PM - Posted by RGSPro (not verified)

$39.42 at Green Man Gaming with code 27RISE-OFTOMB-RAIDER :)

January 29, 2016 | 10:12 PM - Posted by Penteract (not verified)

GMG is a great alternative to the much-celebrated Steam site for purchasing games, as well as GOG.com (from the original "Good Old Games", though these days the games aren't necessarily "old"). Steam rarely offers discounts to pre-release and new titles. And there are other game etailers taking advantage of digital distribution to sell games at a discount as well, so it pays to shop around!

January 31, 2016 | 02:44 PM - Posted by djotter

USD$38.44 on Kinguin.

January 29, 2016 | 05:01 PM - Posted by zMeul (not verified)

I'm confused about the frame time spikes on the Fury X and R9 Nano
you should look at VRAM usage

January 29, 2016 | 05:23 PM - Posted by Ryan Shrout

Yeah, I'll try to look at that this weekend as well. Note though that the GTX 980 has 4GB of memory and the GTX 970 has 4GB (more or less) and they don't exhibit the same behavior.

January 29, 2016 | 06:16 PM - Posted by Familytechguy (not verified)

My bet is compression. That's the main difference between the 390 arch and Fiji. Try a Tonga card as well.

January 29, 2016 | 07:20 PM - Posted by Anonymous (not verified)

The compression is a very low level effect, unless it has to do some prep work when it brings new data into local memory. This frame time spike seems to hit about every 6 seconds or so. If there was an issue with the color or other compression, I would expect a more even effect. Five or six seconds is a long time from the graphics card perspective. Given the time scales involved, I would expect some memory management issue or perhaps some issue when bringing data in from system memory. It is unclear why this would happen with Fiji based devices, but not Hawaii based devices. They have the same memory sizes and the same driver. There is a possibility it could be some effect due to HBM characteristics (different latency characteristics and such), but that also seems unlikely given the time span. I would expect latency characteristics to cause effects within the processing of a single frame, not every couple hundred frames. They may not be able to figure it out without getting help from AMD. Tonga performance characteristics will be interesting since it is GCN 1.2 also. That might tell us whether it is related to the GCN revision, or if it is related to HBM, but it will not give us the exact cause. It could still be some kind of driver issue specific to things done on HBM cards, but not the HBM itself.
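
That roughly six-second cadence is easy to sanity-check against a captured frame time log. A minimal sketch (the input is assumed to be a plain Python list of per-frame times in milliseconds, and the spike threshold is arbitrary):

    # Flag frame-time spikes and report how far apart they are, to see whether
    # they recur on a regular (e.g. ~6 second) cadence.
    def spike_intervals(frametimes_ms, threshold_factor=3.0):
        median = sorted(frametimes_ms)[len(frametimes_ms) // 2]
        spike_times = []          # elapsed time (seconds) at each spiky frame
        elapsed = 0.0
        for ft in frametimes_ms:
            elapsed += ft / 1000.0
            if ft > threshold_factor * median:
                spike_times.append(elapsed)
        return [b - a for a, b in zip(spike_times, spike_times[1:])]

Intervals clustering around six seconds would point at something periodic, like memory management or resource streaming, rather than a per-frame bandwidth limit.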

January 29, 2016 | 08:02 PM - Posted by Familytechguy (not verified)

Considering latency is where HBM should shine, I doubt it's the culprit. Testing Tonga would make or break either possibility. Tonga has the same arch as Fiji (colour compression) but uses GDDR.

January 30, 2016 | 01:59 AM - Posted by Anonymous (not verified)

I said that it seems unlikely. It does have different characteristics though.

January 31, 2016 | 08:10 AM - Posted by cowie (not verified)

Latency is a big problem right now for HBM.

January 31, 2016 | 11:40 PM - Posted by Anonymous (not verified)

I don't know if it is a big problem. It still has buffers to store the currently opened row, like any DRAM device. Accesses within the same row will have low latency. There is higher latency when you need to close the current row and open another. This is the same with any DRAM device, but due to the width of the bus, a row can be read very quickly with HBM, even though it may be larger. I would think that this would fall into the same category as the other effects. With so much data to be read per frame, any latency issues should show up in every frame, reducing the overall frame rate rather than introducing a stutter every couple hundred frames.

January 29, 2016 | 08:11 PM - Posted by Familytechguy (not verified)

And I don't see why you would expect compression to cause issues in a more steady manner. I view it like vsync: if it can only process at 5/6 the speed of memory, it would hiccup every 6 seconds, especially if on the same clock multiplier.

January 30, 2016 | 01:49 AM - Posted by Anonymous (not verified)

The compression that was added with GCN 1.2 is lossless delta color compression. This isn't that high a compression ratio, so the hardware to do the work is not that complicated. I would doubt that decompression and compression hardware on chip would be slower than memory. Also, how much data do you think the GPU has to read for every frame? You are generating a frame sixty times a second (hopefully) and the GPU has to read a massive amount of data for every frame. If the decompression system couldn't keep up, it would increase the time for all frames. It wouldn't just hiccup once every few hundred frames.
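
For anyone unfamiliar with the term, here is a toy illustration of the general idea behind lossless delta encoding; it is not AMD's actual hardware scheme, just the concept of storing small differences instead of raw values:

    # Toy example: store the first pixel, then only the differences between
    # neighbouring pixels. Decoding reverses the process exactly (lossless).
    def delta_encode(pixels):
        return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

    def delta_decode(deltas):
        out = [deltas[0]]
        for d in deltas[1:]:
            out.append(out[-1] + d)
        return out

    scanline = [118, 119, 119, 120, 200, 201, 201]  # mostly small deltas
    assert delta_decode(delta_encode(scanline)) == scanline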

January 29, 2016 | 07:26 PM - Posted by SubTract

I think you're on to something.
I can't find where now, but I remember seeing somewhere yesterday that it required a 3GB AMD card but only a 1GB NVIDIA card.
(I thought it was part of the Steam forum official support post, but unless it's been edited it wasn't.) I don't know how much validity there was to this, but it would be interesting to find out for sure.

February 1, 2016 | 06:23 PM - Posted by zMeul (not verified)

AMD just released the Tomb Raider optimized driver: http://support.amd.com/en-us/kb-articles/Pages/AMD-Radeon-Software-Crims...

time to redo those benchmarks

February 6, 2016 | 04:14 PM - Posted by kenjo

So how is the frame time measured? I guess it's with vsync off, or?

The thing is that I always play with vsync on, as I can't stand the tearing effect, and I always try to set things up so that I never drop below 60. But if these spikes exist even in that mode, it means I have to lower the quality/effects so much that in practice I only use about 50% of what the card can really do to stay above 60 FPS all the time.

It's hugely annoying to have a frame skip every 5 seconds.
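
For what it's worth, the vsync-on arithmetic is simple: at 60 Hz a frame is displayed for whole refresh intervals of about 16.7 ms, so any render time above one interval shows up as one or more repeated frames. A rough sketch with made-up numbers:

    import math

    REFRESH_MS = 1000.0 / 60.0  # one 60 Hz refresh interval

    def refreshes_occupied(frametime_ms):
        # Any render time above one interval holds the previous image on screen
        # for extra refreshes, which is what reads as a "skip" or stutter.
        return max(1, math.ceil(frametime_ms / REFRESH_MS))

    for ft in (14.0, 20.0, 50.0):
        print(f"{ft:5.1f} ms -> shown for {refreshes_occupied(ft)} refresh interval(s)")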

January 29, 2016 | 05:36 PM - Posted by Anonymous (not verified)

Any difference running the game on Windows 7 Vs. Windows 10?
Also any idea when they will patch the game for DX12?

January 29, 2016 | 06:36 PM - Posted by ThE_MarD

Heyyo,

Tbh I don't think Square Enix, Crystal Dynamics, or Nixxes (the devs who ported it to PC) ever promised a DirectX 12 patch for the PC version of the game... so unless that changes soon? I doubt it. :\

I've read that the Xbox One version uses some DirectX 12-ish features (but doesn't actually use DX12); the XBOne has had DX12-like features for quite some time.

Afaik? Only the new Hitman game has promised a Dx12 update.

January 29, 2016 | 06:54 PM - Posted by Anonymous (not verified)

Thanks for the info.
I was hoping for DX 12, but it looks like I'll stick with Windows 7 on my gaming PC then.

January 29, 2016 | 08:40 PM - Posted by arbiter

I seem to remember some story about it being DX12 as well. I am running it on Win10 and it's been pretty much zero problems, except the game has crashed once during gameplay and once when loading my save.

Consoles have pretty much used a closer-to-hardware API for a while. Even Windows at one point back in the '90s used one as well, but having a software layer in between makes the machine much more stable, as 9x machines would just BSOD.

January 29, 2016 | 07:24 PM - Posted by Anonymous (not verified)

Since TR is very CPU efficient and looks great already, I don't see what the point of doing DX12 would be. That would cost money to do, and create issues for some people, since DX12 is pretty new and obviously won't be flawless.

TR is basically the poster child for why NOT to bother with DX12.

Don't forget that when a game is designed WITHOUT DX12 in mind, porting to DX12 would only optimize a few things (like CPU usage). There are a lot of things DX12 can do aside from that, but you need to design the game mostly from scratch to do DX12 right.

January 30, 2016 | 02:32 AM - Posted by RockYouLikeAFurycane (not verified)

But they did actually bother with DX12. In the game folder, a dll file relating to 'D3D12' was found by PCPer's German neighbour: http://www.pcgameshardware.de/Rise-of-the-Tomb-Raider-Spiel-54451/Specia...

This doesn't guarantee a DX12 patch of course, but if they never considered that, these files wouldn't be present.

January 30, 2016 | 06:51 AM - Posted by Majestic (not verified)

Have you actually played the game? I hit frequent CPU bottlenecks on a 4.2GHz 4670K. Yes, it's across all cores, so the code has some nice multithreading.

But it's nothing like the old game that could run on a Core2Duo basically. It's very CPU heavy this time 'round.

February 6, 2016 | 10:21 AM - Posted by Polycrastinator (not verified)

This. I still run a 2600K, and I see frequent frame drops and stuttering at 4.2, but far fewer at 4.4. Running the game at modified high presets on a 780 at 1080p (pixel doubling on a 4K screen, FWIW).

January 30, 2016 | 04:17 PM - Posted by Anonymous (not verified)

Found this:

"Originally Posted by Szaby59 View Post
There was a stream where they mentioned currently the DX11 code runs better, until they can't achieve better user experience with DX12 they will not patch it to the game..."

http://forums.guru3d.com/showthread.php?t=405497&page=4

So DX12 may be possible later.

January 29, 2016 | 05:58 PM - Posted by funandjam

Definitely a game I'll get in Steam's summer sale. Judging by the results for 1440 and the cards used, my guesstimation says I'll get decent performance running at high settings at 1080 using my 7970.

January 29, 2016 | 06:00 PM - Posted by j nev (not verified)

You guys still don't have a 390X to test with, nor did you ever review such a card.

January 29, 2016 | 06:04 PM - Posted by Jeremy Hellstrom

You are correct sir

January 29, 2016 | 06:41 PM - Posted by arbiter

If you want results for the 390X, just look for a 290X 8GB card. They are pretty much identical except the 390X has a small clock bump.

January 29, 2016 | 06:16 PM - Posted by Anonymous (not verified)

This is another game that highlights how good the R9 390 is

January 30, 2016 | 06:03 AM - Posted by Majestic (not verified)

The thing is though, the tests look a little different when that GTX 980 is running 1530/8000.

Also, if you have "only" 8GB of system RAM and a GTX 980, Very High textures cause slowdowns, as they will use up both your VRAM and system RAM to the max and start swapping to the SSD.

January 31, 2016 | 11:47 PM - Posted by Anonymous (not verified)

If you are going to run with really high resolution textures, then you will need more system RAM in addition to more graphics memory. If you are running an 8 GB graphics card, then you probably should be running 16 GB of system memory. Almost everything in graphics memory goes through system memory on the way there. It seems ridiculous if you are trying to run an 8 GB card on 4 GB of system memory or something.

January 29, 2016 | 06:33 PM - Posted by Grnbd (not verified)

Looks like the R9 Fury and Nano keep up pretty well with Nvidia, but this article shows the drawback to choosing the AMD option is the driver support.

January 30, 2016 | 06:05 AM - Posted by Majestic (not verified)

Isn't it always?

Also, @Ryan. You would serve the public well by retesting this (maybe not to the same extent) in the level where you are in Jacob's town. I feel like the 390, 980 and 970 will have a very different relationship...

January 29, 2016 | 06:55 PM - Posted by Dr_Orgo

Performance and at release driver support looks about as expected. The R9 390 results definitely stood out. Price/performance looks really nice on it if someone needed a GPU right now.

Not sure if I'll buy it now or wait to play it after the next round of GPUs are released.

January 29, 2016 | 07:20 PM - Posted by arbiter

I have a GTX 980 using the Very High preset @ 1080. Only noticed a handful of times it dropped below 60fps. GPU load is usually 70-90+%, temp only hitting 71-72C. (eVGA GTX 980 Superclocked ACX 2.0)

January 29, 2016 | 08:08 PM - Posted by Zealotki11er (not verified)

Not sure why you guys wanted to do such testing with no AMD drivers for this game. Keep it up.

January 29, 2016 | 08:11 PM - Posted by arbiter

The problem I think with "no AMD drivers" is: when will AMD drivers be out? That is the problem/question. I am sure they will revisit it when they finally come out.

January 30, 2016 | 02:07 AM - Posted by Anonymous (not verified)

It is ridiculous to have to optimize the drivers for every game. Hopefully DX12 will put an end to that eventually.

January 31, 2016 | 02:54 PM - Posted by Anonymous (not verified)

NVIDIA-paid title = early access for NVIDIA driver support.

January 29, 2016 | 08:13 PM - Posted by Anonymous (not verified)

GURU3D confirmed the AMD issues are related to the CPU core count. If more than 1 CPU core is active, issues come up.

This looks like a game bug, maybe AMD can hack the game with the drivers to make it work properly.

January 29, 2016 | 08:26 PM - Posted by arbiter

Um, wait a sec, if it's a game bug then why doesn't it show up on the NVIDIA side? If it was to do with core count then NVIDIA would show the same problem. The claim doesn't really make sense.

January 30, 2016 | 02:10 AM - Posted by Anonymous (not verified)

It doesn't even show up on other AMD cards. You can have weird race conditions pop up with multiple CPUs, but it seems unlikely that such a bug would only affect Fury cards and no others.

January 30, 2016 | 08:03 PM - Posted by arbiter

Then it's not a bug in the game, it's a bug in the AMD driver that isn't reacting well to something, so it's really up to AMD to fix.

January 31, 2016 | 05:59 PM - Posted by Anonymous (not verified)

You sound so certain.

Nixxes put a statement out on the 29th on Steam that it's investigating low performance issues.

I'm going to side with the people who made the port rather than some poster with no cred.

January 29, 2016 | 09:10 PM - Posted by Anonymous (not verified)

Ryan,

Would it not be better to test "game reviews" on minimal & recommended setups? Testing these games on high-end systems really doesn't give much insight into what to expect for the majority. I mean, using a system with a base of 16GB is rather silly given people will likely be in the range of 6GB to 8GB as suggested.

These review setups look like they cost 3 to 4 times more than an Xbox. So do the added visuals justify that expense?

January 29, 2016 | 09:37 PM - Posted by Anonymous (not verified)

OCZ 256 SSD = $193
ADATA 512 SSD = $184
16GB Ram = $130
Motherboard = $479
i7 5960X = $1,095
AX 1500i PSU = $392
Windows 10 = $119

Total = $2592

Still missing case, cooler, fans, keyboard & mouse, plus the cost of the GPUs that were tested.

X-Box One GoW:UE bundle = $301
RotTR = $39

Total = $340

February 13, 2016 | 10:07 PM - Posted by Anonymous (not verified)

First of all, that's a stupid comparison. It's a test bed that is meant to eliminate all bottlenecks when comparing GPUs. You probably only need about $1300, if not less, even with a 980 or 390X, to run RotTR at high settings at 1080p, all the time (unlike the Xbone, which drops resolution for cutscenes).

And, you get to enjoy the game without massive slowdowns during combat, tearing and capped 30fps on the Xbone.

January 29, 2016 | 11:05 PM - Posted by arbiter

The problem with what you propose: every game has different minimum and recommended setups. The time and money involved in getting that hardware to test with is not worth it. The reason they use a super high-end CPU and board in the GPU test bench is to eliminate the CPU and memory as a bottleneck as much as possible, so the GPU being tested is the weakest link, not the CPU.

January 30, 2016 | 12:26 AM - Posted by Anonymous (not verified)

The problem with your proposal is that not everyone is running said hardware, and test benches like these are being used by the -1%. Just look at the Steam hardware survey.

Only 15% of users have 16GB of RAM. The majority of users are using 4GB (21%) or 8GB (31%); there are almost as many users still on 3GB as there are users with 12GB or above.

Only 0.3% of people use 8-core CPUs. The majority are on 2 cores (48%) or 4 cores (44%).

Only 8% have a GPU with 4GB or more. The majority are running 2GB.

No, this doesn't even reflect the real-world case. What I was asking was for it to reflect the minimum and recommended cases the companies making these games put forth, to see if they play as smoothly as advertised, and whether the added investment of going up a tier in GPU is worth the visual quality the recommended hardware allows.

January 30, 2016 | 12:29 AM - Posted by Anonymous (not verified)

Oops...

15% of users have 12GB or above, not 16GB of RAM.

January 29, 2016 | 09:11 PM - Posted by drbaltazar (not verified)

Yep, either a bad compression system or it isn't fully optimised yet. AMD should have used fast Fourier transform in 2014 to start with. But for them, starting from scratch is a bitch.

January 30, 2016 | 02:15 AM - Posted by Anonymous (not verified)

If it is the compression system, then wouldn't this show up on other games tested on Fury cards?

January 29, 2016 | 10:17 PM - Posted by Anonymous (not verified)

You should add easy to read framerate averages (e.g. Fury X - 4k, 60.1 FPS) etc for the retards like me who can't properly read anything but basic bar graphs.

January 29, 2016 | 11:14 PM - Posted by arbiter

Doing an average like that doesn't really show much, because the game could have a massive FPS spike at one point that gives a boosted average FPS when the rest of the time it's lower.
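
A quick made-up example of that point: two runs with essentially the same average FPS can feel completely different.

    # Same average frame rate, very different experience (numbers are made up).
    smooth = [16.7] * 60                # steady ~60 FPS
    spiky = [12.0] * 59 + [294.0]       # mostly fast, plus one huge hitch
    for name, times in (("smooth", smooth), ("spiky", spiky)):
        avg_fps = 1000.0 * len(times) / sum(times)
        print(f"{name}: avg {avg_fps:.1f} FPS, worst frame {max(times):.0f} ms")

Both runs report roughly 60 FPS on average, but the second one contains a near-300 ms hitch that a bar graph of averages would never show.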

January 30, 2016 | 12:13 PM - Posted by Anonymous (not verified)

Ok, so you disabled HBAO+ because you fear criticism from AMD fanboys, but you didn't disable the "PureHair" effect, an AMD effect that uses huge amounts of memory and has a very high performance impact. In its day it was at the center of many controversies as an effect that damaged performance on all NVIDIA cards, one that took more than a year to run and never ran equally well on both GPU makers. Even now, on both sides, the technique has a worse impact on fps than HBAO+.

HBAO+ has been implemented in many games by now, and in all of them it has shown a very neutral effect on performance across all the GPU makers. This was tested by many other sites, but you decided to ignore that.

You remembered to say that the game is now a TWIMTBP title, but you "forgot" to say that this game still implements AMD effects inherited from the previous one, Tomb Raider 2013, a game you also forgot to say was a Gaming Evolved title.

Biased is your first nature, $$$ir.

Don't use bad excuses for biased review configs; your argumentation isn't sustainable.

January 31, 2016 | 11:05 AM - Posted by Ryan Shrout

If nothing else, I'm glad your comment is here so I have as many "you are biased to AMD" comments as I do "you are biased to NVIDIA" comments. :)

January 31, 2016 | 11:21 PM - Posted by Speely

TressFX (which you're saying was the predecessor to PureHair) had one "controversy" - when it was used in "Tomb Raider 2013". The "controversy" was that, when Nvidia users turned on TressFX, they experienced a large performance hit.

(We know it was just the one "controversy" because, whenever there's an article about something GameWorks wrecking performance on AMD cards, some Nvidia fan will eventually lay forth the argument that, "If TressFX is so much better, why doesn't it get used in anything except Tomb Raider?")

When this (almost always) happens in an Nvidia-sponsored game on AMD hardware, Nvidia fans (like, I'm presuming, yourself) frequently respond with, "Either turn it off, or stop being cheap and go buy an Nvidia card so you can run it."

May I first suggest that you either turn PureHair off, or go buy an AMD card so you can use it.

Second, I'd like to point out to you that the "controversy" ended after less than a week, when Nvidia went and got the source code for TressFX (which was, and is, open-source, which meant they could actually look at the source code) and optimized a new set of drivers to take better advantage of it. (Also note that AMD does not have this option with libraries like HairWorks, closed "black box" libraries that nobody but Nvidia gets to see and, therefore, optimize for.)

The end result? TressFX suddenly ran BETTER on Nvidia cards than AMD cards. I guess you just "forgot" to mention that point.

Now is when I ask you to cite the source that leads you to believe that PureHair "uses huge mem (sic) and have (sic) a very high impact in (sic) performance", and that "the technigue (sic) has very worse (sic) impact than HBAO+ in fps".

Finally, PureHair (an "AMD-effect") is open-source as well. Nvidia can look at the source code and optimize for it as they see fit, and as they sponsored the game (and paid a lot of money to the developers to do so, which is why it's labelled as a TWIMTBP game) and as they had Day 1 drivers ready to go, the question I would pose is, why didn't Nvidia optimize for PureHair? They can. They have everything they need and they've had it for probably months now. But NOT optimizing for it allows them to turn around and complain about AMD tech not working well on Nvidia cards, and thereby attempt to hide the fact that they've been using their own (proprietary) libraries to sabotage AMD performance for years by saying, "Nuh uh! See! Theirs does it to us, too!"

But don't worry. In a week or so, there will be yet another driver update from Nvidia, PureHair will work great on their cards, and the veritable army of Nvidia fanboys will go back to calling people not like them "peasants".

Don't use bad excuses for a biased complaint about a review. It makes your argument unsustainable.

February 4, 2016 | 10:31 AM - Posted by Anonymous (not verified)

Only one correction, sir, in your last paragraph: it will work great ONLY on the 900 series.

February 4, 2016 | 10:26 AM - Posted by Anonymous (not verified)

NVIDIA needed only THREE DAYS to publish a driver that fixed GeForce performance, you moron, not a year.

January 30, 2016 | 01:12 PM - Posted by Alex_Volk (not verified)

"an AMD-effect that uses huge mem"

It doesn't

"a very high impact in performance"

LOL, it doesn't, UNLIKE craphair from NVIDIA. If you don't believe me then go to geforce.com and check NVIDIA's guide lol

So who is the fanboy?

January 30, 2016 | 02:42 PM - Posted by eismcsquared (not verified)

From my experience with the game so far with my i7 6700K, 980 Ti SLI system, the Very High texture setting had by far the most detrimental effect on my frame rate (especially outdoors) even though my 980 Tis have 6GB of framebuffer. My VRAM was pretty much pegged at 6GB with the textures at Very High which resulted in framerates of 40-50 with plenty of dips.

With the textures set to High instead I get ~4GB of VRAM use and pretty much constant 60 FPS except in a few rare instances (this is with all the other settings at Very High). I'm also using a custom SLI profile that allows me to pretty much max out usage on both my cards. Bottom line, 4K with Very High textures and other high settings is basically a no no unless you have some Titan Xs. Another reason why I wish the 980 Ti had 8GB of VRAM instead of 6GB (yes I realize that this is not very feasible given the memory interface being used by GM200).

January 30, 2016 | 08:04 PM - Posted by arbiter

My 980 using Very High settings at 1080 pegs my card at pretty much 4GB used.

January 30, 2016 | 09:15 PM - Posted by eismcsquared (not verified)

According to Geforce.com's guide, for Very High settings at 1080p they recommend a card with 6GB of VRAM. It also stated that prolonged sessions at 4K with Very High settings can see VRAM use above 10GB, which means this is one of the few games that actually benefits from a Titan X if you're gonna play in 4K.

January 30, 2016 | 11:16 PM - Posted by tatakai

The question is when the spikes occur and if they make a difference. If it's during the animated segments where she's reacting to a broken path etc., I don't think so. I see some stutter during the transitions there.

January 31, 2016 | 11:04 AM - Posted by Ryan Shrout

No, you can see the gameplay run-through in the video embedded in the story. You can see it happens during normal gameplay as well.

January 31, 2016 | 03:02 PM - Posted by Anonymous (not verified)

You failed to mention this is an nVidia title which means they probably got to optimize and work with developers a lot more prior to launch.
The fact AMD is so close in perf without it being their title is a good sign.
It'd be interesting to see the perf side by side when a new driver hits.

February 3, 2016 | 08:26 AM - Posted by Maonayze (not verified)

Yes, I have a Fury Tri-X and it does indeed stutter in places during cutscenes and gameplay. I am currently on the Soviet level and it ain't pretty. I hope it is just poor drivers and that they will eventually get them fixed.

I have heard that the original reboot (Tomb Raider 2013) was also like this until both camps (NVIDIA/AMD) had a few drivers to fix the issues... is this true? I only picked up TR2013 about a year after its release, when it ran very well indeed.

I also thought that this review was fair and a good look into the performance and issues on some cards. It seems those frame spikes on AMD Fiji Cards need to be sorted so that myself and Ms Croft can have a smooth relationship.

Thank You Ryan Shrout :-)

January 31, 2016 | 12:39 AM - Posted by Anonymous (not verified)

Great test, but horrible choice of colors for the charts. They are very hard to differentiate.

January 31, 2016 | 12:46 AM - Posted by Albert89 (not verified)

Nvidia cards are always going to have an advantage since this game is DX11. No surprise which GPU sponsor pays the bills at PCPer. When I want biased reviews, I come to PCPer!

January 31, 2016 | 06:10 AM - Posted by Majestic (not verified)

[whine] this review does not confirm my pre-existing bias, therefore I'm going to project my personal bias onto PCPer and blame them for AMD's driver shortcomings. [/whine]

You're the biased person man. Get a mirror.

January 31, 2016 | 09:01 AM - Posted by Batismul (not verified)

Hahaha, rly?

January 31, 2016 | 11:03 AM - Posted by Ryan Shrout

So wait, am I biased or is it just better because of DX11? You make no sense.

January 31, 2016 | 01:34 PM - Posted by Anonymous (not verified)

You're not biased, Ryan. To the point, YES, but not biased.

January 31, 2016 | 11:16 PM - Posted by Speely

As an AMD fan and user, I have found PCPer to be probably the least biased out of all of the tech sites that I read.

You wanna see some tech press bias? Go read some articles on Fudzilla.

January 31, 2016 | 06:44 AM - Posted by tomoyo (not verified)

There is this rumor that you can buy it for $20 in Russia and play it elsewhere via family sharing. Really?

February 1, 2016 | 05:07 AM - Posted by Anonymous (not verified)

Nixxes have stated on the steam forums:

"Also note that textures at Very High requires over 4GB of VRAM, and using this on cards with 4GB or less can cause extreme stuttering during gameplay or cinematics."

http://steamcommunity.com/app/391220/discussions/0/451852225134000777/

This may explain the frame time spikes that were seen.

February 3, 2016 | 08:34 AM - Posted by Maonayze (not verified)

Ah.... Maybe I will try it on High instead of Very High. If this fixes it I will be happy. But a little sad because of the 4GB limit issue. Oh well... bring on the 8GB HBM2 cards in the summer... or thereabouts. :-)

February 1, 2016 | 08:22 AM - Posted by Genova84

Running the game great in 4K with the SLI hack for my twin 980 Tis. Finally a game that actually uses them fully! Well, Far Cry 4 did a good job too, but the lack of SLI support recently was getting me salty.

February 1, 2016 | 11:21 AM - Posted by Patrick3D (not verified)

Is PCPer not going to do 1080P benchmarks anymore? Few people have 1440 and even fewer have 4K (1.28% and 0.7% according to the last Steam Survey).

February 1, 2016 | 12:09 PM - Posted by Anonymous (not verified)

While beautiful, this is the most VRAM-hungry title I have ever used! I cranked everything up, setting-by-setting, just to see what would happen. No complaints on performance. I never saw a dip below 60 fps last for more than a split second, and it only went down to 57 fps (only saw this twice during an hour of play).

But what was surprising was 8.2 GB of VRAM usage at 1080p - that's nutty! I can't say I'm entirely surprised given how amazing the game looks, but I'm surprised that my Titan X is essentially necessary to max out this game.

February 1, 2016 | 02:56 PM - Posted by Anonymous (not verified)

Funny. I think that it would be better if the game used the whole 12GB of VRAM. What is the benefit of unused VRAM to you?

February 2, 2016 | 11:31 AM - Posted by Behrouz (not verified)

Hi Ryan. AMD Crimson 16.1.1 is out. Please rerun the benchmark. Thanks.

February 3, 2016 | 12:53 PM - Posted by Anonymous (not verified)

Re-Run it NOW !!!
.
.
.
.
.
.
.
.
.
.
.
please..

February 6, 2016 | 05:06 PM - Posted by Bryan (not verified)

Test Geothermal Valley / Soviet Installation or gtfo.

February 6, 2016 | 09:48 PM - Posted by Anonymous (not verified)

So for the updated article, you decided to showcase Nvidia's best performing card, and AMD's weirdest performing card and nothing else? Are you serious? You should've thrown in the 970 and 390 as well.

February 7, 2016 | 01:06 AM - Posted by Anonymouse (not verified)

Yea, WTF Ryan? Your lil Miss will cry shame. She will NOT be proud of daddy on this one.

If the 980 Ti (non-SLI) was tested, then surely your single Fury X should have been - if only to see the difference, if any. It's understandable that they can't all be re-tested (workload, time constraints, etc.), but we're not talking about Peggle Nights here; this is a huge title that challenges the top tier cards. So those should be the focus.

February 7, 2016 | 08:23 AM - Posted by Anonymous (not verified)

The difference between the Nano and Fury X is so small that it should not be a factor when choosing between AMD and Nvidia for a dual-card setup. You can easily add those 5-10% and calculate Fury X perf if you want. And there are not too many Nano benchmarks out there - I always love to see them :)

February 8, 2016 | 11:06 PM - Posted by Anonymous (not verified)

The nano could be throttling in some cases due to the strict thermal limitations. I would have preferred that they use the Fury or Fury X instead, but if they don't have dual Fury/X cards available, then that just wasn't an option.

February 7, 2016 | 04:00 AM - Posted by holyneo

Are you running a reference 980 Ti? Because my GIGABYTE G1 Gaming card is getting better results, just curious.

Since you favor AMD over Nvidia I figured you would run a reference card with a pink chart color, so it doesn't make the AMD Fury X look so bad.... hahahah just playing, but seriously my card kicks ass!!!

Yes, the pink line is hard to see on some phones. :P

February 7, 2016 | 06:08 PM - Posted by Anonymous (not verified)

I keep forgetting that any review that is not an immediately glowingly perfectly admiringly wonderfully perfect review of an Nvidia product IMMEDIATELY MEANS THE REVIEWER IS AN AMD FANBOY.

Jesus. Some of you Nvidia people are pathetic.

February 9, 2016 | 03:23 AM - Posted by ThorAxe

I think holyneo was joking.....:)

February 9, 2016 | 05:08 PM - Posted by holyneo

I was, glad you get it.

;)

Sorry for those that didn't, well not really. Take a deep breath, slowly walk away from the computer, ponder my reply till a smile forms onto your face.

February 10, 2016 | 07:27 PM - Posted by MarkieG84 (not verified)

Can you please run the frame time test with an R9 280 or 280X? That would help figure out if it is GCN 1.2 or 4GB of VRAM that is causing the spikes with the Fury X and Nano. I know the 980 doesn't have them, but that doesn't mean much because of so many different variables such as drivers.

February 10, 2016 | 08:18 PM - Posted by MarkieG84 (not verified)

Sorry I meant 290 or 290x

February 12, 2016 | 04:20 AM - Posted by Anonymous (not verified)

Can you confirm that R9 Nano was not throttling during the testing?

February 18, 2016 | 03:17 PM - Posted by MarkieG84 (not verified)

I own Nano CrossFire and I underclock to 900MHz to keep the top card from throttling. Left at stock the spikes are a bit worse/more frequent. Overclocking is out of the question. There are still spikes at a steady 900MHz. I would really like to see benchmarks with R9 290X CrossFire vs R9 390X CrossFire to test if the spikes are related to the RAM. Obviously drivers could improve Nano/Fury CrossFire either way, because 980 SLI does not have the same spikes.

March 18, 2016 | 12:16 PM - Posted by ryanbush81

The comments are too funny!

I feel like this is a two party system, NVidia or AMD! So I'm going to vote Intel Integrated Graphics are better than both combined. Yea for independents!

July 8, 2016 | 01:39 PM - Posted by Titan_Y (not verified)

Would be great to see this revisited now that patch 1.0.668.1 (Patch #7) is out with improved async compute support. The addition of Polaris benches would be valuable too.
