
Borderlands 2 PhysX Performance and PhysX Comparison - GTX 680 and HD 7970


PhysX Settings Comparison

Borderlands 2 is a hell of a game; we actually ran a 4+ hour live event on launch day to celebrate its release and played it after our podcast that week as well.  When big PC releases occur we usually like to look at the game's performance on a few graphics cards to see how NVIDIA and AMD stack up.  Interestingly, for this title PhysX technology came up again, with NVIDIA widely pushing the game as a showcase implementation of its GPU-accelerated physics engine.

What you may find unique about Borderlands 2 is that the game actually allows you to enable PhysX features at Low, Medium and High settings with either an NVIDIA or AMD Radeon graphics card installed in your system.  In past titles, like Batman: Arkham City and Mafia II, PhysX could only be enabled (or at least raised to higher settings) if you had an NVIDIA card.  Many gamers using AMD cards saw this as a slight, and we tended to agree.  But since we could enable it with a Radeon card installed, we were curious to see what the results would be.


Of course, don't expect the PhysX effects to be able to utilize the Radeon GPU for acceleration...

Borderlands 2 PhysX Settings Comparison

The first thing we wanted to learn was just how much difference you would see by moving from Low (the lowest setting, there is no "off") to Medium and then to High.  The effects were identical on both AMD and NVIDIA cards and we made a short video here to demonstrate the changes in settings.

Continue reading our article that compares PhysX settings on AMD and NVIDIA GPUs!!

One thing you may not notice right away - we are using an upcoming benchmark scene that was developed by Gearbox and sent to us by NVIDIA.  I know, I know, some of you are going to complain about using a benchmark provided by NVIDIA, but my understanding is that it will soon find its way into the main game via an upcoming patch, so I feel comfortable utilizing it now.  The automatic run-through uses real-time game engine code, AI implementation and the effects of gunfire, exploding barrels and more, just as we saw in our 4+ hours of game time.

Another interesting note is that this early version of the benchmark has two additional scenes when PhysX is set to the High preset - a bug according to our contacts at NVIDIA.  Annoying, but not that big of a deal since it occurred on both AMD and NVIDIA based configurations.


From the video above you can see that the difference in gameplay between PhysX at Low and at Medium is pretty substantial - there are particles that kick up from the ground when you shoot it, fluids that pour out of exploding barrels, etc.  The changes in moving from Medium to High are quite a bit more subtle and include additional fluid interactions and more particles in general.

Initially, NVIDIA told us that the PhysX settings were utilizing the CPU when a non-NVIDIA graphics card was installed, but as we looked at our results (on the next page) that doesn't actually appear to be the case. 

Testing Setup and Configuration

For our quick testing today we ran just one AMD card and one NVIDIA card: the Radeon HD 7970 GHz Edition 3GB and the NVIDIA GeForce GTX 680 2GB - the top single-GPU options from each vendor.  Driver versions were 12.8 and 306.23 respectively.


Using FRAPS we recorded 80 seconds (almost all) of the benchmark map to get our average, minimum and maximum frame rates as well as our frame rates over time.  Because we expected PhysX to use more CPU horsepower on the AMD card than on the NVIDIA card, we used the Windows Performance Monitor to record total CPU utilization during the benchmark as well.

We ran Borderlands 2 at 1920x1080 with maximum image quality settings, with only the PhysX configuration changing from Low to Medium to High.
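If you want to reproduce our average/minimum/maximum numbers yourself, they can be derived directly from a FRAPS frametimes log. Here is a minimal sketch; the input format (cumulative milliseconds per frame) and the synthetic sample data are assumptions for illustration, not FRAPS' documented output:

```python
# Sketch: derive avg/min/max FPS over one-second windows from a
# FRAPS-style frametimes log (cumulative milliseconds per frame).
# The input layout here is an assumption, not FRAPS' documented spec.

def fps_per_second(timestamps_ms):
    """Count frames rendered inside each whole second of the run."""
    buckets = {}
    for t in timestamps_ms:
        sec = int(t // 1000)
        buckets[sec] = buckets.get(sec, 0) + 1
    return [buckets[s] for s in sorted(buckets)]

def summarize(timestamps_ms):
    """Average, minimum and maximum of the per-second frame rates."""
    fps = fps_per_second(timestamps_ms)
    return {"avg": sum(fps) / len(fps), "min": min(fps), "max": max(fps)}

if __name__ == "__main__":
    # Synthetic 3-second run: 60 fps, then 30 fps, then 45 fps.
    ts = [i * 1000 / 60 for i in range(60)]          # second 0
    ts += [1000 + i * 1000 / 30 for i in range(30)]  # second 1
    ts += [2000 + i * 1000 / 45 for i in range(45)]  # second 2
    print(summarize(ts))  # {'avg': 45.0, 'min': 30, 'max': 60}
```

In a real run you would read the timestamp column out of the FRAPS CSV instead of synthesizing it.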

October 1, 2012 | 02:42 PM - Posted by JC (not verified)

Does nvidia still support using an old graphics card (ie. a 9800GT) in a secondary slot as a PhysX accelerator? I wonder if that would help the AMD case?

October 1, 2012 | 04:32 PM - Posted by Yuri (not verified)

You can do that. See this page for more details:
http://physxinfo.com/wiki/Hybrid_PhysX

October 1, 2012 | 05:10 PM - Posted by JC (not verified)

I guess nvidia never really "supports" this as a driver hack is required, but it would be interesting to see the benefit of the added GPU physics processing.

October 1, 2012 | 10:25 PM - Posted by Arb (not verified)

In the NVIDIA Control Panel you can tell it which card to use for PhysX.


March 1, 2013 | 03:25 AM - Posted by Srivathsa (not verified)

Yes, if your motherboard has two PCI Express x16 slots then you can use the second card dedicated to PhysX.

Then again, if you already have a very powerful NVIDIA card (above 300 CUDA cores), you don't need a dedicated PhysX card, because PhysX only needs a card with around 16 CUDA cores.

October 1, 2012 | 05:51 PM - Posted by Anonymous (not verified)

I'm playing Borderlands 2 on an HD 6970 at 2560x1600 with everything maxed. I have a GT 440 I use just for PhysX (on High), and in some limited testing I've not seen the NVIDIA card go over 50% usage even during a big fight. You should be able to get by fine on a GT 430, which you can regularly get for $20 on sale.

October 2, 2012 | 04:51 AM - Posted by MFpterodactyl (not verified)

Seems about right - half a GT 440 worth of computation. There is no reason a modern i7 can't handle that if properly threaded. It's cliché to say this by now, but NVIDIA is playing really, really dirty. I understand not spending resources optimizing their tech for their competition's GPUs, but they purposely crippled it on CPUs.

Shame on you Nvidia, and this is coming from the guy with triple 680s.

October 2, 2012 | 06:56 AM - Posted by Lord Binky (not verified)

The problem is going from a highly parallel environment to a highly sequential one. They may be able to fudge it with low PhysX settings, but on High I'm sure it's trying to do the same thing on the CPU that a GPU does, because you have to use the same PhysX algorithms for reliability. So you end up with the CPU waiting on atomic functions and thread joins. It really is likely just a consequence of how PhysX is implemented for the GPU structure, and moving between parallel and sequential (with or without threading) efficiently is a non-trivial problem.

Think about it: if you're processing 512 objects, the GPU can do that in one instruction, after which the next function that depends on those 512 results can run. On an 8-core CPU you first have to split up your threads (which takes time) and shuffle the data in and out of the CPU caches, while the GPU can transfer the same data in one move; then, when everything is done, you have to wait for every last thread to complete before starting the next function, so you're waiting on your threads to join. Moving data around (especially randomly) is one of the more time-consuming tasks for a CPU, so it isn't surprising that the processor may spend a lot of time idle.
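The fork/join pattern described above can be sketched in a few lines. This is a toy illustration only (the object count, worker count and "physics step" are made up): each stage must partition the work, spawn workers and then block on joins before the next stage may begin - bookkeeping a single wide GPU dispatch simply doesn't have:

```python
# Toy sketch of the split/join overhead the comment describes: 512
# "objects" are split across 8 worker threads, and the stage cannot
# finish until every thread has joined. Names and sizes are invented.
import threading

OBJECTS = list(range(512))
WORKERS = 8

def stage(values, out, idx):
    # Stand-in for one physics step applied to a slice of the objects.
    out[idx] = sum(v * v for v in values)

def run_stage(objects):
    chunk = len(objects) // WORKERS
    results = [0] * WORKERS
    threads = [
        threading.Thread(target=stage,
                         args=(objects[i * chunk:(i + 1) * chunk], results, i))
        for i in range(WORKERS)
    ]
    for t in threads:
        t.start()          # splitting the work costs time up front...
    for t in threads:
        t.join()           # ...and the stage waits on the slowest worker
    return sum(results)

if __name__ == "__main__":
    # Same answer as a serial loop, but with per-stage thread overhead.
    print(run_stage(OBJECTS) == sum(v * v for v in OBJECTS))
```

On a GPU the equivalent stage is one wide dispatch; here the thread setup and the join barrier are pure overhead repeated at every stage boundary.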

October 2, 2012 | 07:09 AM - Posted by Lord Binky (not verified)

I just want to be clear: I'm not saying NVIDIA could not do a PhysX-ish equivalent that's efficient on the CPU and would perform pretty much the same (that may be what PhysX light is, for all I know). They could get something that is 'pretty much' PhysX-equivalent. But the real PhysX engine fits the very specific criteria of modern GPUs. Processing the same instructions on multiple data in parallel is simply what GPUs are designed for, and CPUs will never match that because they are not specialized that way, nor do you want them to be.

October 3, 2012 | 03:27 PM - Posted by hans.meiser (not verified)

And to expand on that point, NVIDIA has no reason to put any engineering resources into making something work better when you use a competitor's product.

I'm certainly not in favor of PhysX, but it is what it is. The CPU path isn't going to change.

October 1, 2012 | 10:42 PM - Posted by Anonymous (not verified)

Sorry, but this is BS. You are saying that a faster AMD GPU can allow the user to run the game with Medium PhysX, while the AMD GPU has absolutely nothing to do with PhysX. It all depends on the CPU if you are playing with an AMD GPU. However, even a 16-core CPU won't be enough to cope with PhysX, because NVIDIA likes it this way.

October 2, 2012 | 06:48 AM - Posted by Ryan Shrout

Look at our CPU utilization graphs - it doesn't appear to be CPU bound.

October 2, 2012 | 08:19 AM - Posted by X videos (not verified)

The game is not using many cores - that's why total CPU usage is low. But that doesn't mean it is not CPU bound; it is, just in a different way (more toward single-core IPC and memory I/O throughput).

PhysX at Medium/High is pure CPU work if you don't have an NVIDIA GPU, so whether you have a 7770 or a 7970 doesn't matter (at the worst moments, with reasonable graphics settings) - the CPU is the defining factor in the super low framerates...
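The arithmetic behind that point is easy to miss: a single saturated core almost disappears into the total-utilization average. A quick illustration (the sample percentages are invented, not measured):

```python
# Why a low "total CPU" figure can hide a CPU-bound game: total
# utilization is the mean across all cores, so one pegged core on an
# 8-core machine barely moves the number. Sample values are invented.

def total_utilization(per_core):
    """Overall CPU % as Performance Monitor-style mean across cores."""
    return sum(per_core) / len(per_core)

if __name__ == "__main__":
    # One core saturated by single-threaded physics, seven nearly idle.
    cores = [100, 5, 5, 5, 5, 5, 5, 5]
    print(total_utilization(cores))  # 16.875 -> looks "not CPU bound"
    print(max(cores))                # 100    -> the actual bottleneck
```

This is why a per-core breakdown, not just the total, is needed to spot a single-threaded bottleneck.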

October 1, 2012 | 11:37 PM - Posted by Anonymous (not verified)

I'm surprised. I found it quite difficult to tell the physics difference between Medium and High in most cases except a few parts here and there that were noticeable. Ahhh how the gaming world brings us progress and new basic standards with each generation.

October 2, 2012 | 06:49 AM - Posted by Ryan Shrout

That's kind of my thought as well - I would be happy with Medium, especially if I noticed any kind of performance hit at High.

October 6, 2012 | 06:22 AM - Posted by johanpeeters83 (not verified)

It's probably because you don't have a high-end GTX to render the High setting. Also, about the CPU: over here the i3 540 iGPU has 1500 MB of RAM for rendering at 1080p, and my GTX 480 handles 64x mixed-sample AA, 16x aniso and Ultra PhysX.
There is a difference too: the Philips Pixel Plus engine runs at 170 Hz and my i3 easily runs 170-200 fps, while the GTX 480 runs ambient occlusion, 16x aniso mode and 64x MSAA. You need to bypass the frame rate application control mode in nVidia Inspector to the highest available. Lots of features are turned off so that people with low-end hardware don't need to shut features down before they can play.

October 2, 2012 | 06:50 AM - Posted by alexthelion (not verified)

I'm running GeForce v301.42 on a Gigabyte 560ti SOC (faulty pos fyi). Buttery smooth at 1600x900, everything on full except for filtering (set at x8). I'm playing as a siren if anyone is interested - might affect physx performance.

More than playable - I was stunned at how well this game runs. I know what it's like to play a slideshow (HL2 at 640x480 and 26fps max). I've noticed some textures appear as purple and particles appear blue once in a while. I've only crashed once in 4 days of my holiday Borderlandathon - a TDR resulting from my crap card.

October 2, 2012 | 10:31 PM - Posted by whosleeps (not verified)

So if i have a 7970 ghz edition and I got a geforce card for physx, how would that work? I know it can be done, but is it even worth it? Does the cost of the extra card make it worthwhile, and if so which card to get?

October 3, 2012 | 03:10 PM - Posted by Anonymous (not verified)

It was worth it for me, but I put 300 hours into borderlands 1 and will probably put more into this one. Just get a low end $20-$30 GT430/GT440. This video shows a comparison and has some setup info specific to borderlands:

http://www.youtube.com/watch?v=8VMjYcNrzZ4

Basically: install the 285 driver, install the standalone PhysX package, and run the extra hybridize tool. Rename or delete the physxcore.dll and physxdevice.dll files in the Borderlands bin folder. You can use GPU-Z to confirm PhysX is enabled on the ATI card and that load is put on GPU2 during gameplay.

October 6, 2012 | 06:47 AM - Posted by johanpeeters83 (not verified)

Every PhysX card runs the Low setting; for Medium you want a midrange card, and for High you need high-end.
Make sure you put the GTX in the second PCI Express slot, not the one close to the CPU - that's where your biggest GPU should go. When done, download NVIDIA ForceWare; you can run two different drivers no problem, just make sure you set PhysX to the GeForce.
Now you have all shader clusters free for level-of-detail rendering, letting you play true Ultra 3D textures. You can use nVidia Inspector to force ambient occlusion, and when you think 120 fps at 1080p is good enough, set MSAA 128x - I don't get a single frame drop. For the nVidia Inspector users: set AA behaviour flags to none and AA mode to 'selector user' instead of 'selector application', then choose the forced 128x. Mixed sample already includes supersampling, so turn SSAA to none; AA gets done by the CUDA clusters - 16 for the GTX 580 Fermi vs 8 for Kepler, but Kepler has a better PolyMorph engine so it can also do 64x mixed sample, 128x for SLI.

October 3, 2012 | 01:26 AM - Posted by Stanley (not verified)

IMHO CPU PhysX is strictly single-core crap (at least as far as I knew a year ago). That's the reason overall CPU load is so low. Give us utilization graphs per CPU core!

October 4, 2012 | 01:35 PM - Posted by razor512

If possible, can you run the test using a low-to-midrange card doing PhysX on the single card, or PhysX running on a second low-end card, e.g. a GT 430 or lower paired with a GTX 460, or a low-to-midrange AMD chip using the PhysX mod?

With games making more use of PhysX, it is worth checking whether you could get better performance by adding a $30 video card to a system with a midrange GPU (also post GPU usage of the low-cost card dedicated to PhysX).

It may also be worth checking how much PhysX can be stressed before the limits of the low-cost GPU are reached. (Great if someone is thinking of getting a cheap GPU and keeping it as a PhysX card for 3+ years while replacing their main GPU with more value-oriented ones more often.)

October 5, 2012 | 12:45 PM - Posted by Anonymous (not verified)

People always seem to forget that NVIDIA does a fine job using the CPU for software-accelerated PhysX on the Xbox 360 and PS3. Neither of the GPUs in those consoles supports hardware acceleration of PhysX, so NVIDIA clearly has a smoother way to run it on CPUs alone with their SDK. If they can do it on those console CPUs, then they can do a better job on PC CPUs.

October 5, 2012 | 04:49 PM - Posted by razor512

It could be a purposefully imposed threading limitation.

GPUs are heavily threaded, so PhysX must be able to use more than one thread - or at the very least, not run PhysX on the same CPU core that the main game is using.

October 6, 2012 | 08:49 AM - Posted by johanpeeters83 (not verified)

So in the end the i7 and the fancy AMD CPUs aren't the best in gaming. Also, SLI 680 or quad-SLI 680 are not as good at handling textures as an i3 540 with a GTX 480 dedicated to PhysX, AA and tessellation, because that's what a dedicated PhysX card does. I get 170 to 200 fps on Ultra with 64x AA, my GTX 480 runs 99 percent for PhysX, 3DMark score 5800 and FurMark score over 8000, beating the 7970 in CrossFire and also an i7 with a GTX 580 (MSI Lightning - btw, I removed the reference cooler the day after I bought it).
Also, a lot of people talk about physics; that's a whole lot different from PhysX. PhysX is NVIDIA only, not AMD. There's a big, big difference between physics and PhysX - I said it before and I'll say it again.

November 25, 2012 | 07:05 PM - Posted by Anonymous (not verified)

tell your cat to stop walking across your keyboard. i cant read this shit.

May 19, 2013 | 10:56 PM - Posted by Mack (not verified)

Agree and well said xD

http://www.youtube.com/watch?v=yG5dl_XEnck

October 12, 2012 | 06:45 AM - Posted by Sli (not verified)

Running GTX 680 SLI and seeing no performance gain at all. One card shows the same results as SLI!

Another thing I noticed: in combat, GPU usage drops from 50% down to 15-30%?

April 30, 2013 | 03:18 PM - Posted by Anonymous (not verified)

Run two different video cards, one from AMD and one from NVIDIA, by installing the AMD card in the first slot and resetting the CMOS.

Source: http://fbappointments.com/consumer-products/computers/144-how-to-run-2-d...
