Pairing up the R9 380X

Subject: Graphics Cards | May 16, 2016 - 03:52 PM
Tagged: amd, r9 380x, crossfire

A pair of R9 380Xs will cost you around $500: a bit more than $100 less than a single GTX 980 Ti, and on par with or a little less expensive than a straight GTX 980.  You have likely seen those comparisons, but how often have you seen these cards pitted against a pair of GTX 960s, which cost a little less than two 380Xs?  [H]ard|OCP decided it was worth investigating, perhaps for those who currently own a single one of these cards and are considering a second if the price is right.  The results are very tight; overall the two setups performed very similarly, with some games favouring AMD and others NVIDIA. Check out the full review here.
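For a rough back-of-the-envelope comparison, here is a quick sketch of the math; the street prices below are assumptions for illustration, not figures from the review:

# Back-of-the-envelope cost comparison. The street prices are assumed
# for illustration only; check current listings before buying.
prices = {
    "R9 380X": 250,
    "GTX 960": 210,
    "GTX 980": 500,
    "GTX 980 Ti": 620,
}

configs = {
    "R9 380X CrossFire": 2 * prices["R9 380X"],
    "GTX 960 SLI": 2 * prices["GTX 960"],
    "GTX 980 (single)": prices["GTX 980"],
    "GTX 980 Ti (single)": prices["GTX 980 Ti"],
}

# Print each configuration from cheapest to priciest.
for name, cost in sorted(configs.items(), key=lambda kv: kv[1]):
    print(f"{name:22s} ${cost}")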


"We are evaluating two Radeon R9 380X video cards in CrossFire against two GeForce GTX 960 video cards in a SLI arrangement. We will overclock each setup to its highest, to experience the full gaming benefit each configuration has to offer. Additionally we will compare a Radeon R9 380 CrossFire setup to help determine the best value."


Source: [H]ard|OCP



May 16, 2016 | 04:52 PM - Posted by Anonymous (not verified)

Looks like they too suffer from PCPerspective GameWorks testing suite syndrome.

May 16, 2016 | 05:15 PM - Posted by Anonymous (not verified)

Yes, there will be more middleware wars for gaming, but AMD appears to be open-sourcing more of its middleware! Games' middleware dependencies, and the benchmarking suites, need to be monitored lest things become very AnTuTu-ed!

May 16, 2016 | 05:49 PM - Posted by Anonymous (not verified)

That is not surprising or unexpected! HardOCP has been promoting GameWorks-related games for the last couple of years. Just look at any video card review they've done and the subsequent comments.

May 16, 2016 | 05:09 PM - Posted by Anonymous (not verified)

"Friday was a big day for AMD's open-source team as beyond publishing experimental Southern Islands / GCN 1.0 support for AMDGPU they also published for the first time open-source OverDrive overclocking support for the AMDGPU DRM kernel driver."(1)

(1)

"An Ubuntu/Debian Kernel To Play With AMDGPU's OverDrive Overclocking Support"

https://www.phoronix.com/scan.php?page=news_item&px=AMDGPU-OverDrive-Kernel
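For the curious, that initial OverDrive support is exposed through sysfs; here is a minimal sketch of driving it from Python, assuming the GPU is card0 and the kernel exposes the percent-based pp_sclk_od knob that the early AMDGPU patches added (run as root):

#!/usr/bin/env python3
"""Minimal sketch of AMDGPU's percent-based OverDrive sysfs knob.

Assumes the GPU is card0 and the kernel exposes pp_sclk_od, the
interface the initial OverDrive patches added. Run as root.
"""
from pathlib import Path

SCLK_OD = Path("/sys/class/drm/card0/device/pp_sclk_od")

def get_overdrive_percent() -> int:
    """Read the current core-clock overdrive as a percentage."""
    return int(SCLK_OD.read_text().strip())

def set_overdrive_percent(pct: int) -> None:
    """Write a new overdrive percentage; the driver clamps the range."""
    if not 0 <= pct <= 20:  # the driver has historically capped OD around 20%
        raise ValueError("overdrive percent outside the expected 0-20 range")
    SCLK_OD.write_text(f"{pct}\n")

if __name__ == "__main__":
    print(f"current overdrive: {get_overdrive_percent()}%")
    # set_overdrive_percent(5)  # uncomment to apply a 5% core overclock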

May 16, 2016 | 07:29 PM - Posted by Jeremy Hellstrom

Didn't get that email; damn you, Phoronix, as I would have linked to that!

May 16, 2016 | 08:02 PM - Posted by Anonymous (not verified)

"First US p e n i s transplant successfully carried out on Massachusetts man" (1)

Now all those flagship FanBoy GITs (Red and Green) with really small ones will not need those big cards to make up for their deflated ego/little wanker issues!

(1) 5/16/2015 [filter does not like P word]

arstechnica.com

May 16, 2016 | 08:05 PM - Posted by Anonymous (not verified)

edit: (1) 5/16/2015 [filter does not like P word]
to: (1) 5/16/2016 [filter does not like P word]

Damn! what year is it!

May 16, 2016 | 07:43 PM - Posted by slyons89

Seems about even in performance, but it's not a great idea to bet on good SLI and CrossFire support in many current and upcoming games. Things may get better with DX12 explicit multi-adapter, but mainstream games supporting that are a long way off. A single card is the way to go with any budget under $700.

May 16, 2016 | 08:44 PM - Posted by Ha-Nocri (not verified)

HardOCP, showing NV-favored games since 2000

May 16, 2016 | 08:58 PM - Posted by Anonymous (not verified)

Hahahaha FUCK SHITTY JANKY NIGGER RIGGED GHETTO POOR PEOPLE PARTS from AMD

May 16, 2016 | 10:24 PM - Posted by Anonymous (not verified)

You are one of those flagship FanBoy (Green Team) GITs with a really small one that needs those big cards to make up for their deflated ego/little wanker issues!

May 17, 2016 | 12:15 AM - Posted by Anonymous (not verified)

Looks like someone's crabby because their preferred brand lost again. Poor little racist crybaby.

May 17, 2016 | 04:35 PM - Posted by Not_Anonymous (not verified)

Time to block this guy's IP. He comments on every AMD-related article. He makes inflammatory comments about the authors and anyone who would consider AMD products. And he isn't even original; he just repeats poor/peasant. Janky? AMD GPUs are designed in Canada, BTW.

May 17, 2016 | 06:03 PM - Posted by Anonymous (not verified)

I've had PCPer on my AdBlock whitelist for a couple years now. I think it might get turned back on until they do something about that guy.

May 16, 2016 | 09:43 PM - Posted by Anonymous (not verified)

I don't understand why these people change settings on certain games. I can understand turning off async compute for Nvidia or GameWorks features for AMD, but comparing High settings on one card to Ultra on another card?
Man, that is fucking stupid.

Even so, the R9 380X CF still does much better than the 960 SLI, which is not surprising, as even the R9 380 is better than a GTX 960.

May 17, 2016 | 06:45 PM - Posted by Anonymous (not verified)

You have to approach that site's benchmarks with a few understandings in mind:

1. You pretty much have to take their results in concert with other sites' results. Once you understand [H]'s methodology and purpose, their results can stand on their own, but viewed through the same lens as another site's benchmarks, they don't make sense.

2. For what [H] is looking for, they don't necessarily want to use the same settings - unless it just turns out that way. In this, [H]'s benchmarks (not including the "Apples-to-Apples" sections) are a bit more objective than others'. [H] is looking for the "highest playable settings" (sketched in code after this comment), which in many instances does a lot to highlight the particular advantages each subject might have. For example, take a look at the Fallout 4 benchmarks, which give a prime example of what you're questioning, as well as a prime example of why they did it. The 380X-CF runs about 17-19 FPS faster than the 960-SLI, but they had to turn down Godrays and AO to do it. That's what [H] determined were the "highest playable settings", meaning that turning anything up higher dropped the framerate to an unplayable rate. The 960-SLI had no trouble running at a playable framerate with Godrays and AO maxed out, even if its average was lower. This shows two things: one, that the 960-SLI has a big advantage with Godrays and AO, and two, that those technologies put a beating on the 380X-CF. With a normal equal-settings benchmark, using the highest playable settings of the 960-SLI, all you would see is the 380X-CF getting thrashed. Compare that to [H]'s results and you can see WHY the 380X-CF got thrashed.

3. The "Apples-to-Apples" sections are there to be that correlation, as well. They don't care about playability, there. They care about working the cards as hard as they can with the highest settings possible, playable or not. This is the equal-settings comparison. If the "highest playable settings" highlights advantages, the "Apples-to-Apples" highlights weaknesses.

Don't take [H]'s results all on their own, but don't write them off either. Keeping their intent in mind, compare their results to other sites' and the resolutions and settings they used, and take it all as a whole.
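To make the "highest playable settings" idea above concrete, here is a minimal sketch of that search in Python; the presets and FPS figures are made up for illustration and are not [H]ard|OCP's data:

# Sketch of a "highest playable settings" search, in the spirit of the
# methodology described above. Presets and FPS numbers are hypothetical.
PLAYABLE_FPS = 40  # assumed playability floor, in average FPS

# Average FPS per preset, ordered from highest settings to lowest.
presets = [
    ("Ultra + Godrays + AO maxed", 28),
    ("Ultra, Godrays/AO reduced", 47),
    ("High", 63),
]

def highest_playable(results, floor=PLAYABLE_FPS):
    """Return the first (highest) preset whose average FPS clears the floor."""
    for name, avg_fps in results:
        if avg_fps >= floor:
            return name, avg_fps
    return None  # nothing playable at any preset

found = highest_playable(presets)
if found:
    name, fps = found
    print(f"highest playable: {name} at {fps} avg FPS")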

May 18, 2016 | 09:39 AM - Posted by Anonymous (not verified)

All cards should be stress tested like this to point out any weaknesses, with a note to readers that such testing stresses the hardware only, and that most games/game settings can be adjusted for playability. Also, any card that can run at a playable frame rate with some group of settings maxed out, while the other card cannot, deserves to be praised!

Now, over the next year, testers should test the new Nvidia and AMD cards, and also go back to both vendors' cards from the previous two years, with DX12- and Vulkan-based games (on both Windows and Linux)! This is just to see which new cards perform better under DX12/Vulkan, and which older cards perform better there too, provided any software/driver/firmware tweaks can enable the newer APIs on cards going back two years.

So let's see who had the older cards with the best async-compute ability over the past two years, and who has the new cards with the best async-compute ability over the year following the release of the first Pascal cards (already here) and the first Polaris SKUs! The first Polaris cards should be here in a few weeks, and over the next year there should be Vega and Volta SKUs to compare as well.

May 17, 2016 | 02:43 AM - Posted by Anonymous (not verified)

Does anyone edit these articles? The first two sentences make no sense whatsoever.

May 17, 2016 | 08:59 AM - Posted by Cantelopia (not verified)

I get testing a card (or two) to its maximum capability, but can anyone really tell the difference between High and Ultra settings while playing?

May 17, 2016 | 12:36 PM - Posted by Anonymous (not verified)

LOL...

Not usually...

In blind tests, without looking at the graphics options, I'd be curious whether even one person could tell the difference beyond guessing.
