Civilization: Beyond Earth Performance: Maxwell vs. Hawaii, DX11 vs. Mantle

Manufacturer: Firaxis

A Civ for a New Generation

Turn-based strategy games have long been defined by the Civilization series. Civ 5 consumed hours upon hours of the PC Perspective team's free time (and likely the working hours too), and it looks like the new Civilization: Beyond Earth has a chance to do the same. Early reviews of the game from GameSpot, IGN, and Polygon are quite positive, which is great news for a PC-only release; those can sometimes get overlooked in the gaming media.

For us, the game offers an interesting opportunity to discuss performance. Beyond Earth is definitely more CPU-bound than the other games we tend to use in our benchmark suite, but the fact that this game is new, shiny, and even has a Mantle implementation (AMD's custom API) makes it worth at least a look at the current state of performance. Both NVIDIA and AMD have released drivers with specific optimizations for Beyond Earth as well. This game is likely to be popular, and it deserves the attention it gets.

Testing Process

Civilization: Beyond Earth, a turn-based strategy game that can take a very long time to complete, ships with an integrated benchmark mode to help users and the industry test performance under different settings and hardware configurations. To enable it, you simply add "-benchmark results.csv" to the Steam game launch options and then start the game normally. Rather than taking you to the main menu, you'll be transported into a view of a map that represents a fairly typical game state for a long-term session. The benchmark measures performance using the last settings you ran the game with (without the modified launch options), so be sure to configure those before you benchmark.

The output is the "results.csv" file, saved to your Steam game install root folder. In there you'll find a list of numbers, separated by commas, representing the frame times for each frame rendered during the run. You don't get an average, minimum, or maximum without doing a little work. Fire up Excel or Google Docs and remember the formula:

Avg FPS = 1000 / Average(all frame times in ms)

It's a crude measurement that doesn't take into account any errors, spikes, or other interesting statistical data, but at least you'll have something to compare with your friends.
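
If you'd rather script it, here is a minimal Python sketch that computes those numbers from the benchmark output. It assumes "results.csv" sits in the current directory and holds comma-separated frame times in milliseconds, as described above:

    # parse_results.py - quick stats from Beyond Earth's benchmark output.
    # Assumes results.csv holds comma-separated frame times in milliseconds.
    import csv

    with open("results.csv", newline="") as f:
        frame_times = [float(v) for row in csv.reader(f) for v in row if v.strip()]

    avg_ms = sum(frame_times) / len(frame_times)
    print(f"Frames rendered: {len(frame_times)}")
    print(f"Average FPS:     {1000 / avg_ms:.1f}")
    print(f"Slowest frame:   {max(frame_times):.1f} ms")
    print(f"Fastest frame:   {min(frame_times):.1f} ms")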


Our testing settings

Just as I have done in recent weeks with Shadow of Mordor and Sniper Elite 3, I ran some graphics cards through the testing process with Civilization: Beyond Earth. This time that means only the GeForce GTX 980 and Radeon R9 290X, along with SLI and CrossFire configurations. The R9 290X was run in both DX11 and Mantle.

  • Core i7-3960X
  • ASUS Rampage IV Extreme X79
  • 16GB DDR3-1600
  • GeForce GTX 980 Reference (344.48)
  • ASUS R9 290X DirectCU II (14.9.2 Beta)

Mantle Additions and Improvements

AMD is proud of this release, as it introduces a few interesting things alongside the Mantle API itself.

  1. Enhanced-quality Anti-Aliasing (EQAA): Improves anti-aliasing quality by doubling the coverage samples (vs. MSAA) at each AA level. This is automatically enabled for AMD users when AA is enabled in the game.
  2. Multi-threaded command buffering: Utilizing Mantle allows a game developer to queue a much wider flow of information between the graphics card and the CPU. This communication channel is especially good for multi-core CPUs, which have historically gone underutilized in higher-level APIs. You'll see in our testing that Mantle makes a notable difference in smoothness and performance in high-draw-call late-game scenarios.
  3. Split-frame rendering: Mantle empowers a game developer with total control of multi-GPU systems. That “total control” allows them to design an mGPU renderer that best matches the design of their game. In the case of Civilization: Beyond Earth, Firaxis has selected a split-frame rendering (SFR) subsystem. SFR eliminates the latency penalties typically encountered by AFR configurations.

EQAA is an interesting feature as it improves on the quality of MSAA (somewhat) by doubling the coverage sample count while maintaining the same color sample count as MSAA. So 4xEQAA will have 4 color samples and 8 coverage samples, while 4xMSAA would have 4 of each. Interestingly, Firaxis has decided that EQAA will be enabled in Beyond Earth any time a Radeon card is detected (running in Mantle or DX11) and AA is enabled at all. So even though the menus might show 4xMSAA enabled, you are actually running 4xEQAA. For NVIDIA users, 4xMSAA means 4xMSAA. Performance differences should be negligible though, according to AMD (who would actually be "hurt" by this decision if it brought down FPS).
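
To make the sample counts concrete, here is the mapping at the "4x" menu setting as AMD describes it (a summary of the paragraph above, not game code):

    # Samples per pixel at the 4x AA menu setting in Beyond Earth
    aa_modes = {
        "NVIDIA (4xMSAA)": {"color_samples": 4, "coverage_samples": 4},
        "AMD (4xEQAA)":    {"color_samples": 4, "coverage_samples": 8},
    }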


The added performance capability of the multi-threaded command buffer allows the game to take better advantage of CPU cores. This was a key tenet of the Mantle API to begin with, though we have previously focused on lower-end processors as the main beneficiary. In this case, AMD claims that R9 290X cards running in CrossFire might see as much as a 20% performance increase when going from a 6-core to an 8-core processor.
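
To illustrate the pattern (this is conceptual Python, not Mantle code; all names here are hypothetical), the idea is that several CPU threads record command buffers in parallel while a single submission thread feeds finished buffers to the GPU, instead of one thread doing everything:

    # Conceptual producer/consumer sketch of multi-threaded command buffering.
    # Real engines also sequence the buffers; that detail is omitted here.
    import queue
    import threading

    draw_batches = queue.Queue()   # work produced by the game's render logic
    submit_queue = queue.Queue()   # recorded command buffers awaiting the GPU

    def record_worker():
        # Producer: turn a draw batch into a command buffer on a CPU core.
        while (batch := draw_batches.get()) is not None:
            submit_queue.put(f"cmdbuf({batch})")  # stand-in for real recording

    def submit_worker():
        # Consumer: hand finished command buffers to the (pretend) GPU.
        while (cmd_buf := submit_queue.get()) is not None:
            print("GPU executes", cmd_buf)

    workers = [threading.Thread(target=record_worker) for _ in range(4)]
    submitter = threading.Thread(target=submit_worker)
    for w in workers:
        w.start()
    submitter.start()

    for batch in range(8):         # the game queues up draw work
        draw_batches.put(batch)
    for _ in workers:              # one shutdown sentinel per worker
        draw_batches.put(None)
    for w in workers:
        w.join()
    submit_queue.put(None)
    submitter.join()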


A reintroduction of split-frame rendering for CrossFire is perhaps the most interesting of these three Mantle additions. Though all current SLI and CrossFire implementations use AFR (alternate frame rendering), that wasn't always the case. When NVIDIA and ATI first brought back multi-GPU gaming, SFR and AFR were competing solutions, but as game engines got more complex, splitting the per-frame workload became problematic and the industry settled on AFR. Since Mantle's release, though, we have known that multi-GPU support requires a lot more work from the developer, as it needs to be implemented on a per-game, per-engine basis. That can be problematic for some games and game engines, but Firaxis used this as an opportunity to offer SFR again for a specific goal. AMD's Robert Hallock explains:

Mantle empowers game developers with full control of a multi-GPU array and the ability to create or implement unique mGPU solutions that fit the needs of the game engine. In Civilization: Beyond Earth, Firaxis designed a “split-frame rendering” (SFR) subsystem. SFR divides each frame of a scene into proportional sections, and assigns a rendering slice to each GPU in AMD CrossFire™ configuration. The “master” GPU quickly receives the work of each GPU and composites the final scene for the user to see on his or her monitor.

If you don’t see 70-100% GPU scaling, that is working as intended, according to Firaxis. Civilization: Beyond Earth’s GPU-oriented workloads are not as demanding as other recent PC titles. However, Beyond Earth’s design generates a considerable amount of work in the producer thread. The producer thread tracks API calls from the game and lines them up, through the CPU, for the GPU’s consumer thread to do graphics work. This producer thread vs. consumer thread workload balance is what establishes Civilization as a CPU-sensitive title (vs. a GPU-sensitive one).

Because the game emphasizes CPU performance, the rendering workloads may not fully utilize the capacity of a high-end GPU. In essence, there is no work leftover for the second GPU. However, in cases where the GPU workload is high and a frame might take a while to render (affecting user input latency), the decision to use SFR cuts input latency in half, because there is no long AFR queue to work through. The queue is essentially one frame, each GPU handling a half. This will keep the game smooth and responsive, emphasizing playability, vs. raw frame rates.

Let me provide an example. Let’s say a frame takes 60 milliseconds to render, and you have an AFR queue depth of two frames. That means the user will experience 120ms of lag between the time they move the map and that movement is reflected on-screen. Firaxis’ decision to use SFR halves the queue down to one frame, reducing the input latency to 60ms. And because each GPU is working on half the frame, the queue is reduced by half again to just 30ms.

In this way the game will feel very smooth and responsive, because raw frame rate scaling was not the goal of this title. Smooth, playable performance was the goal. This is one of the unique approaches to mGPU that AMD has been extolling in the era of Mantle and other similar APIs.
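
Hallock's arithmetic is easy to restate. A rough sketch under his stated assumptions (60 ms per frame, an AFR queue two frames deep):

    # Input latency in Hallock's example, all times in milliseconds.
    frame_time = 60                  # one GPU rendering one full frame

    afr_latency = frame_time * 2     # two queued frames -> 120 ms
    sfr_latency = frame_time * 1 / 2 # one-frame queue, each GPU renders
                                     # half the frame -> 30 ms
    print(afr_latency, sfr_latency)  # 120 30.0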

Interesting stuff. The staff at PC Perspective is always interested in ways game developers and hardware vendors can improve game smoothness and feel. Firaxis had similar commentary on its blog about the release:

What does this have to do with multi-GPU?  Current multi-GPU solutions are implemented in the driver, without knowledge of, or help from, the game rendering engine.  With the limited information available drivers are almost forced to implement AFR, or Alternate Frame Rendering, which is an approach where individual frames are rendered entirely on a single GPU.  By alternating the GPU used each frame, rendering for a given frame can be overlapped with rendering of previous frames, resulting in higher overall frame rates.  The cost, however, is an extra frame of latency for each GPU past the first one.  This means that AFR multi-GPU solutions have worse response time than a single GPU capable of similar frame rates.

Rather than trying to maximize frame rates while lowering quality [with AFR], we asked ourselves a question: How fast can we get a dual-GPU solution without lowering quality at all?  In order to answer this question, we implemented a split-screen (SFR) multi-GPU solution for the Mantle version of the game.  Unlike AFR, SFR breaks a single frame into multiple parts, one per GPU, and processes the parts in parallel, gathering them into the final image at the end of the frame.  As you might expect, SFR has very different characteristics than AFR, and our choice was heavily motivated by our design of the Civilization rendering engine, which fits the more demanding requirements of SFR well.  Playing the game with SFR enabled will provide exactly the same quality of experience as playing with a single, more powerful GPU.

This feature is only available when running Beyond Earth on a Mantle-capable graphics card with the game in Mantle mode. One note that found its way to us somewhat late: ensure you are testing Mantle mGPU by setting "Enable MGPU" to 1 in the graphics initialization file, found in the My Documents/My Games folder. Apparently, enabling CrossFire in the driver control panel isn't quite enough to get the job done.
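
For reference, the change should look something like the line below. Take this as an approximation based on the note above; the exact file name, and whether the key uses a space or an equals sign, may differ on your install:

    ; in the graphics initialization file under My Documents/My Games
    ; (exact file name varies by install; key name per the note above)
    Enable MGPU 1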

October 23, 2014 | 10:12 PM - Posted by Anonymous (not verified)

brace yourself. Winter is coming.


October 23, 2014 | 10:34 PM - Posted by Anonymous (not verified)

Thanks for the article, very informative. Seems there's a lot to see in this new game.
I'm liking this application of SFR, and I wonder if it would be possible to run two completely different GPUs in that case (like a 7970 with a 7870). There doesn't seem to be a reason why the rendering portions for each GPU need to be equal. It's also funny to see how the tables have turned from just a few years ago, with SLI now giving high frame rates and CrossFire giving lower but smoother results.

October 23, 2014 | 10:40 PM - Posted by godrilla (not verified)

AMD titles bring decent requirements and improved performance with Mantle. NVIDIA titles bring crazy requirements: AC Unity has a GTX 680 as a minimum requirement! We need more Mantle support to save the PC from Sony and Microsoft.

October 23, 2014 | 11:12 PM - Posted by ThorAxe

A shame about the Crossfire performance..

October 24, 2014 | 07:41 AM - Posted by Anonymous (not verified)

I have to say I am rather VERY excited about those min framerate results of Xfire with Mantle!

SFR and good min Framerates are key for Oculus Rift.

October 24, 2014 | 04:40 AM - Posted by arbiter

AC Unity pushes hardware more to the limits IMO, instead of holding back. PCs are not limited to the slow, low performance that consoles are.

October 24, 2014 | 01:06 PM - Posted by godrilla (not verified)

Pushes hardware? The minimum should be at least equivalent to the console hardware in terms of power, so a 7790 should be the minimum, not a 7970! More like a bad port.

October 25, 2014 | 01:48 PM - Posted by Anonymous (not verified)

The reason for the requirement of the higher card is support for mantle. GCN cards are compatible with mantle and the 7790 does not support mantle as far as I know.

October 25, 2014 | 02:35 PM - Posted by Anonymous (not verified)

7790 does support mantle

November 6, 2014 | 03:50 PM - Posted by ppi (not verified)

It rather looks like the devs overshot the state of PCs at the time of release. Having way-too-high minimum specs relegates the game to the following:
a) Weak initial sales;
b) If the game is super top notch, the bulk of gamers will buy it in two years' time in some Steam/GoG sale
= Weak profits for the developers = lesser chance that such a game will be released in the future

I mean, I admire what the guys are doing with graphical fidelity, but what good is it when just a fraction of their audience has the iron to play it on? Check the TechReport HW survey - and that's enthusiasts. And mind you, the "recommended specs" on a game are typically the actual minimum needed to play it reasonably.

The dev's wallet does not care if they sell the game to a GTX 980 owner or a 7790 owner.

Obviously, there are franchises that do depend on top-notch visuals, or there would be little reason to buy those games whatsoever. But I do not believe all of the nVidia-supported games are that kind.

Civ benefits from good performance when scrolling around.

October 24, 2014 | 03:02 AM - Posted by Jules (not verified)


The time of Dx dominance is beginning to crumble.

October 24, 2014 | 04:41 AM - Posted by arbiter

yea right, mantle won't survive. DX12 will crush it because it will have the one thing mantle lacks: support from BOTH AMD and Nvidia.

October 24, 2014 | 07:44 AM - Posted by Anonymous (not verified)

Nothing in the tech world is static. DX is still the king, but Mantle IS making some waves for sure. DX12 will benefit everyone, but by the time it comes out Mantle will be even more widespread. As long as it is easy for devs to implement, there's no reason they would stop using it.

October 24, 2014 | 02:45 PM - Posted by Griffinhart (not verified)

Nothing in the world is static, but unless Mantle can support NVidia and Intel alongside AMD-based solutions, DX will remain the standard, it being the common platform.

October 25, 2014 | 01:49 PM - Posted by Anonymous (not verified)

whether or not mantle supports nvidia is nvidia's decision. It is an open API and if nvidia wanted to, they could make a driver.

December 2, 2014 | 01:59 PM - Posted by Anonymous (not verified)


AMD absolutely has not made Mantle open source yet. NVidia has no access to the code. Nor does Intel.

October 24, 2014 | 09:06 AM - Posted by Anonymous (not verified)

Have you been calling your psychic again?

October 24, 2014 | 12:37 PM - Posted by godrilla (not verified)

Lol, a Windows 10 exclusive API is going to kill Mantle? That's interesting; if anything, it's what will make Mantle more attractive. You have to have a bit of intelligence here to understand this, simple, unless you are just trolling.

October 28, 2014 | 08:30 AM - Posted by Anonymous (not verified)

except for the fact that windows 10 will be a free upgrade from 8.1. go throw hate somewhere else.

October 30, 2014 | 09:02 AM - Posted by Anonymous (not verified)


Microsoft give Windows 10 away for free? What have you been smoking?!

October 27, 2014 | 02:17 AM - Posted by Sonic4Spuds

I have to agree, but perhaps not in the way you would expect. I would actually expect that more games will go towards OpenGL, seeing as there is a decent push towards Linux and Mac gaming. If Microsoft wants to maintain dominance in the market, they will have to support other platforms. Perhaps devs will keep the multi-stack approach they have had, seeing as many of them already have to support both DX and OpenGL across the consoles. I do find it interesting that several indie studios have chosen to use OpenGL exclusively, with good results.

October 24, 2014 | 03:48 AM - Posted by Martin Trautvetter

On the one hand, it absolutely makes sense to spend whatever game-specific optimization budget you have on the API you expect the best results with.

On the other hand, AMD will get clobbered pretty hard every time a magazine or website decides to do a head-to-head test and uses DX11.

Which sounds all very 1999 to me, and still leads me to conclude that these special APIs are only a net positive if you're already competitive when using the standard API. (at which point you might be asking yourself if your own special API is truly worth all the effort and money you put into it)

I'm just WTF? on the AA thing. It shouldn't be too hard to put an extra checkbox in there, and forcing differing levels of AA depending on the GPU used seems like an odd choice to begin with.

October 24, 2014 | 04:46 AM - Posted by arbiter

Mantle in theory sounded good, but it did kinda push a bit more work onto the devs to get the performance out of it, like for CF. CF for Mantle had to be set up in the game, else it didn't work (refer to the below image as a case in point). The test they did with Sniper Elite 3 displayed how well properly implemented CF/SLI can scale in DX11.

October 24, 2014 | 04:46 AM - Posted by JohnGR

The minimum frame scores show the real winner here and it is 290X with Mantle.

In games that's what counts, not if you can hit 500 fps for a second.

October 24, 2014 | 07:42 AM - Posted by Anonymous (not verified)

Even more so with VR HMDs like the Oculus Rift.

October 24, 2014 | 02:57 PM - Posted by Anonymous (not verified)

let's ignore the fact of how immature the Maxwell drivers are.

October 24, 2014 | 03:11 PM - Posted by Anonymous (not verified)

lol, you base your opinion on one bench.
In the rest, the 980 is ahead or pretty much on par, and this isn't even with the DX12 API.

And like said before, drivers are immature.

And almost a year in now with Mantle; just look at all those games.

October 25, 2014 | 01:49 AM - Posted by JohnGR

Maxwell is 8 months old. Anyone remember the 750/750 Ti? No?

So let's wait for the next typical driver that Nvidia will call "Wonder Driver II" in huge marketing fonts.

By the way, the 980 costs $550+ while the 290X has dropped close to $330. Keep paying Nvidia. It's your money to throw away.

October 24, 2014 | 08:58 AM - Posted by Anonymous (not verified)

Games feel incredibly smooth and lag-free with Mantle.

But the real question I have is not about FPS, but AI CPU time used, especially lategame. Does the Mantle version perform AI turns faster by wasting less CPU time attending to GPU tasks? No matter how well threaded the driver, that CPU time is still wasted.

How differently do the DX and Mantle versions behave lategame waiting for AI turns to end?

October 24, 2014 | 10:58 AM - Posted by Master Chen (not verified)

Damn. It looks like Huang is at it again - buying up the game developers so that they cripple the game code in the favor of noVideo under DirectCrap. Yet another "noVideo-favorable" title.

Yes yes, it does have a full Mantle 1.0 support. So what? It still doesn't put away the fact that this game is very clearly utterly biased towards noVideo.

And I'm also noting that I'm NOT an AMD and/or Radeon fanboy in any way whatsoever, I'm on a GTX 980 (Zotac AMP Extreme) right now myself, but I still utterly hate and outright LOATHE Huang's "tactics".

October 24, 2014 | 11:35 AM - Posted by Searching4Sasquatch (not verified)

Huh? Dafuk are you talking about? Civ has an exclusive marketing and bundle deal with AMD, uses Mantle, etc.

You're pissed because NVIDIA's DX11 drivers are good?

October 24, 2014 | 02:31 PM - Posted by Master Chen (not verified)

>noVideo's DirectCrap 11 drivers are good


October 24, 2014 | 12:05 PM - Posted by Shrapnol (not verified)

FRAPS doesn't work for this game??? I used the built-in FPS readout and I'm getting the max refresh of my monitor! FPS stuck at 75 FPS on an FX 8350, R9 270X, and 16 GB RAM! Runs really fast and smooth on my Mantle setup :)

October 24, 2014 | 12:11 PM - Posted by Jabbadap (not verified)

Thank you for the test. One thing I want to ask:

Do you plan to revisit the test when the Mac/SteamOS version with OpenGL 4 comes out? Too bad there's no OpenGL renderer for the Windows version to get direct comparison data, though.

October 24, 2014 | 02:42 PM - Posted by Edmond (not verified)

holy fucking shit...

Mantle is looking VERY good now... yeah, CFX could use some work, but that's an eventuality

give me FreeSync and I'm done with Nvidia

October 24, 2014 | 03:17 PM - Posted by Anonymous (not verified)

yeah, because Mantle shows such a bright future; almost a year in now, look at all those games.

And I'm not even going to get into FreeSync; people with Nvidia already have G-Sync.

So yeah, quit pretending to be someone you are not.

October 25, 2014 | 01:52 PM - Posted by Anonymous (not verified)

considering it is only now reaching a mature form, and ANY new API takes time for adoption, I'd say it is doing very well.

October 24, 2014 | 03:49 PM - Posted by Anonymous (not verified)

Interestingly, the article says that Mantle gives a smoother experience at 2560x1440, but the graph shows a constant 15-18ms bounce, which for 60Hz vsync users (i.e. most people) would mean a constant stutter

I'm assuming the tests and commentary alleviate to sync off usage?

October 25, 2014 | 09:28 AM - Posted by Ryan Shrout

I assume you are talking about the multi-GPU results? I didn't experience any stutter, but we had V-Sync disabled, which would likely alleviate stutter (in favor of tearing). It's definitely worth looking at again.

October 25, 2014 | 09:47 AM - Posted by Anonymous (not verified)

Hi Ryan,

Yeah, sorry, looks like autocorrect messed with the end of my post; it should say all your tests relate to vsync off.

Would be interesting to get feedback on those same tests with vsync on a 60Hz monitor (as that seems to be the prevailing refresh rate at 1440).

Obviously if it's 1440 and high refresh, like the Swift, it's less of an issue, but then G-Sync would probably make the DX frametime variance less of an issue anyway.

October 25, 2014 | 01:39 AM - Posted by Asterra (not verified)

Hey guys. I'm going to type this up because you folks seem to be the ones who care the most about frame issues in multi-GPU solutions.

I'll start by stressing a particular point: While it's true that many gamers do not use vsync, and that there are valid reasons not to, the fact of the matter is that most people who actually care about the issues you underscore probably DO use vsync because a smooth, tearing-free experience is in effect the bare minimum expected from their expensive gaming PC. This means that useful information relevant to their chosen configuration can be difficult to glean from your charts, as you have elected to give only peripheral focus to vsync scenarios.

What can be tested under vsync conditions? The most important item is of course frame delivery time. What most people today refer to as "microstuttering" is in fact a completely separate issue from what "microstuttering" meant in its earliest iteration. Those moments of gargantuan frame drops / runts might better be labelled "macrostuttering" to differentiate. When I speak of "microstuttering", I refer to the typically regular, ever-present incongruency in frame time delivery between the two GPUs (all else being idealized), e.g. 16ms 43ms 50ms 76ms 83ms 110ms... In a vsync config, this gives the visual result not of a smooth 60fps but of two separate 30fps feeds interleaved (essentially blended) with one another, which is an atrocious dealbreaker. The problem is correctly-defined and vividly illustrated in this old article:

What I would propose is that you begin including some vsync-friendly charts (configured for 60Hz) that zoom in on some of the frame activity, giving a frame-by-frame analysis of delivery time, such as in the above link. This would have been particularly interesting in this case, with the first example of split-frame rendering since the earliest days of multi-GPU. By highlighting any lingering issues with this phenomenon, you will serve a dual purpose: First, you will take AMD and Nvidia to task in addressing this problem, which remains the #1 reason not to go multi-GPU. Second, you will highlight possible rays of hope for users frustrated at the persistent performance tradeoffs associated with multi-GPU configs. As a bonus for yourselves, you will be able to show everyone just how awful vsync is, because it increases latency by some ~20ms.

October 25, 2014 | 09:36 AM - Posted by Ryan Shrout

Hi Asterra, thank you for your very good comments on the story.

I think your request for a focus on V-Sync enabled testing is worthwhile, and it's honestly something I haven't spent enough time on. The only question I would have for you going forward is this: does the increased popularity of 120/144 Hz displays dramatically change the interest in multi-GPU testing?

That link to the TR report doesn't specifically mention VSync that I can tell either...?

I'm also curious how you would measure the supposed 20ms latency addition of mGPU configurations? 

October 27, 2014 | 06:12 PM - Posted by Asterra (not verified)

(I gave a somewhat long-winded reply to this but it vanished a couple of days ago. Not sure if that's a quirk of the feedback formatting or the result of a database snafu.)

October 26, 2014 | 11:25 AM - Posted by H1tman_Actua1

AMD, poor little guys! Keep trying!

Maybe their new "supreme leader" will bring them some success.

October 26, 2014 | 08:01 PM - Posted by remon (not verified)

You mean how a previous-generation AMD card is on par with the current Nvidia one, and with smoother gameplay too?

October 27, 2014 | 06:25 AM - Posted by Nox (not verified)

You can't deny that the new nVidia GPUs will outperform the latest AMD GPUs after a driver update from nVidia. As an AMD fanboy (yes I am), even I can see that. But that is not really a problem, right?

For example: I'm running two 7970s in CrossFire, and have been for over a year now. Do you want to know how much money I spent on my GPUs? 450 euros. Yeah, that's 100 bucks less than a single 980. Sure, the 980 will become cheaper. Let's say the price of two 7970s ends up on par with a single 980. Even then, the two 7970s will easily outperform the 980, even more so in Mantle-supported games. I see a steady increase in Mantle-supported games, so they've got that going for them, which is nice.

Long story short: you pay less money per FPS, even in the high-performance-any-game-crushing segment. Yup. Can't deny that I need an 850 watt power supply, though, whereas I'd only need a 600 watt power supply with a 980 :(

October 26, 2014 | 11:28 AM - Posted by H1tman_Actua1

Poor little AMD

Keep trying, maybe your new supreme leader will bring you some success.

October 26, 2014 | 11:43 AM - Posted by John H (not verified)

Hey PCPer - great review & thanks for the steps to benchmark this yourself!

October 27, 2014 | 07:51 PM - Posted by wizpig64 (not verified)

"But I can't help but wonder why the DX11 results for AMD in this game falter in the way they do."

to be fair, the R9 290X costs $350 and the GTX 980 is $550, which seems pretty in line with the single-card DX11 performance. The fact that Mantle is able to bring performance up to par with the 980 is very impressive though, which makes me excited for DX12.

CrossFire underperforming compared to SLI, though? Business as usual.

Looking forward to the answer regarding FRAPS vs. FCAT on page 2. If the internal timer thinks the card is taking the wrong amount of time to render a frame, wouldn't that make the internal animations run at a stuttered pace, despite the frames of those animations looking smooth on screen?

October 27, 2014 | 08:00 PM - Posted by Anonymous (not verified)

wow.. Gaming on an AMD GPU is far superior to an Nvidia GPU, including dual-card setups. The mins are much higher on AMD also. But most importantly, I would take smooth-as-butter over stutter and an extra couple of frames any day.

I will be selling my Nvidia GPU for this game and buying AMD. I purchased Nvidia for its smoother gameplay, but since last year that doesn't seem to be the case any more, especially with all the newer games being released.

October 27, 2014 | 08:03 PM - Posted by Anonymous (not verified)

What happened to the tech Nvidia was supposed to have in its GPUs? How come it's not working, and how come Nvidia GPUs are inferior to AMD's GPUs in frame times (smoothness)?

AMD has come really far and is ahead in gameplay experience, judging by frame times and latency.

October 28, 2014 | 04:21 AM - Posted by Anonymous (not verified)

DX 4 life !

October 28, 2014 | 03:45 PM - Posted by MoDeMFoX (not verified)

This game refused to work at the proper resolution on a BenQ XL2420TE plugged in via DisplayPort.

I had the newest AMD video drivers and tried it with both a single card and CrossFire on R9 280Xs.

It seemed pretty fun other than the glaring issues with video.

October 28, 2014 | 04:32 PM - Posted by Alex Antonio (not verified)

Can you be more specific about the ini file for multi GPU mantle?

November 4, 2014 | 06:25 PM - Posted by RagingCain (not verified)

I created another Frame Time Analyzer for Civilization: Beyond Earth, so you guys don't have to manually calculate FPS, draw graphs, check out frame variance, etc.

Currently on v0.0.4!
