
Quad-Core Gaming Roundup: How Much CPU Do You Really Need?

Subject: Systems
Manufacturer: PC Perspective
Tagged: quad-core, gpu, gaming, cpu

Introduction and Test Hardware


The PC gaming world has become divided into two distinct types of games: those designed and programmed specifically for the PC, and console ports. Unfortunately for PC gamers, it seems that far too many titles are simply ported over (or at least optimized for consoles first) these days, and while PC users can usually enjoy higher detail levels and unlocked frame rates, there is now the issue of processor core count to consider. This may seem artificial, but in recent months quite a few games have been released that require at least a quad-core CPU to even run (without modifying the game).

One possible explanation for this is current console hardware: PS4 and Xbox One systems are based on multi-core AMD APUs (the 8-core AMD "Jaguar"). While a quad-core (or higher) processor might not be technically required to run current games on PCs, the fact that these chips exist in consoles might help explain a quad-core CPU as a minimum spec. This trend could simply be the result of current x86 console hardware, as development of console versions of games is often prioritized (and porting has become common for development of PC versions). So it is that popular dual-core processors like the $69 Intel Pentium Anniversary Edition (G3258) are suddenly less viable for a future-proofed gaming build. Hacking these games might make dual-core CPUs work, and might even be the only way to get such a game to load at all since the CPU is checked at launch, but this is obviously far from ideal.
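As an illustration only (no specific game's launcher is being quoted here), a launch-time core check can be as simple as comparing the OS-reported logical core count against a hard-coded minimum. The function name and threshold below are assumptions for the sketch:

```python
import os

# Hypothetical sketch of a launch-time CPU check of the kind described above.
# REQUIRED_CORES is an assumed value; real titles hard-code their own minimum.
REQUIRED_CORES = 4

def meets_core_requirement(required: int = REQUIRED_CORES) -> bool:
    """Return True if the OS reports at least `required` logical cores."""
    detected = os.cpu_count() or 1  # os.cpu_count() can return None
    return detected >= required

if __name__ == "__main__":
    if not meets_core_requirement():
        print("This game requires a quad-core CPU or better.")
```

This is why a dual-core G3258 fails such a check outright even when it has the raw speed to run the game: the count, not the performance, is what gets tested.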


Is this much CPU really necessary?

Rather than rail against this quad-core trend and question its necessity, I decided instead to see just how much of a difference the processor alone might make with some game benchmarks. This quickly escalated into more and more system configurations as I accumulated parts, eventually arriving at 36 different configurations at various price points. Yeah, I said 36. (Remember that Budget Gaming Shootout article from last year? It's bigger than that!) Some of the charts that follow are really long (you've been warned), and there’s a lot of information to parse here. I wanted this to be as fair as possible, so there is a theme to the component selection. I started with three processors each (low, mid, and high price) from AMD and Intel, and then three graphics cards (again, low, mid, and high price) from AMD and NVIDIA.

Here’s the component rundown with current pricing*:

Processors tested:

Graphics cards tested:

  • AMD Radeon R7 260X (ASUS 2GB OC) - $137.24
  • AMD Radeon R9 280 (Sapphire Dual-X) - $169.99
  • AMD Radeon R9 290X (MSI Lightning) - $399
  • NVIDIA GeForce GTX 750 Ti (OEM) - $149.99
  • NVIDIA GeForce GTX 770 (OEM) - $235
  • NVIDIA GeForce GTX 980 (ASUS STRIX) - $519

*These prices were current as of 6/29/15, and of course fluctuate.


Looking over the list above I’m immediately drawn to the R9 280 at just $169.99 on Amazon, a great price for a mid-range card (we'll see how well it performed). On the NVIDIA side I'll note that my choice of mid-range card might be questioned, with the GTX 960 the new $200-$220 option; I picked the 770 over the 960 simply because I already had one on hand to test. Finally, all of the AMD cards tested were overclocked retail models, such as the MSI R9 290X Lightning with a 1080 MHz core. Stock performance will be a bit lower, but these are all off-the-shelf cards and nothing was run beyond retail spec. If it was overclocked by the manufacturer, I ran it that way. Each platform was configured using default settings, with 8GB of dual-channel 1600 MHz DDR3 used for each testbench.

Making Sacrifices


The six games used to benchmark this hardware

For game testing I made the decision to use only automated benchmarks for the sake of consistency. I wanted to minimize variance in the test results, as there will often be minute differences between runs with this much hardware. As a result, there were some excellent candidates that I simply couldn't use because they lack an automated benchmark tool. I tried to vary the mix of games to provide an uncolored look at the true potential performance of the hardware: of the 6 games selected, half use AMD's Mantle API (these were tested with DX11 as well), and at least one (Civilization: Beyond Earth) is known to be highly CPU-bound.

All tests were run at both 1920x1080 and 2560x1440, with three identical runs at each resolution for each hardware configuration. Drivers were current when testing began in January, and are therefore out of date by current standards; locking the driver versions for the duration of testing was necessary to provide a true comparison between hardware results.
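The figures in the charts that follow can be read as simple per-configuration averages of those repeated runs. A minimal sketch of that reduction (the FPS values below are invented for illustration, not from the actual data):

```python
# Hypothetical sketch: reduce repeated benchmark runs of one hardware
# configuration at one resolution to a single reported figure.
def average_fps(runs: list[float]) -> float:
    """Mean average FPS across identical runs of one configuration."""
    return sum(runs) / len(runs)

runs_1080p = [131.4, 130.9, 131.2]  # three identical runs (made-up numbers)
print(round(average_fps(runs_1080p), 2))  # → 131.17
```

Running each configuration three times and averaging is what keeps minute run-to-run differences from being mistaken for hardware differences.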


AMD cards were tested using Catalyst Omega 14.12


NVIDIA cards were tested using GeForce Game Ready Driver 347.25

Windows 8.1 64-bit was used for all game testing, and games were loaded using identical Steam backups for each title.

Without further preamble let’s get to the test results!

July 6, 2015 | 01:01 PM - Posted by collie

WOW. that's one hell of a project. Ok, so, do yourself a favor. Open the door and go outside, DONT BE AFRAID OF THE YELLOW HOT THING THAT IS UP!!!! that is mr sun, mr sun is your friend.

July 6, 2015 | 01:34 PM - Posted by Jeremy Hellstrom

That's a lie and you know it collie, that is the big yellow pain ball.

July 6, 2015 | 03:53 PM - Posted by Anonymous (not verified)

Run away!!!

July 13, 2015 | 02:13 PM - Posted by anubis44

Actually, the sun is essentially 100 million fusion bombs going off every second, with hydrogen fusing quite readily due to the incredible gravity-induced pressure in its core. It's really not that friendly, and really, nobody should get more than about 10 minutes of sun a day, and even then, you should wear sunscreen.

Basically, lying in the sun on a beach is not much different than seeing an atomic flash in the distance and saying 'Great! Time to get a tan!' and then removing your clothes so you can soak in the radiation.

April 4, 2016 | 09:47 PM - Posted by Anonymous (not verified)

The sun's rays (at least the ones past our atmosphere) are non-ionizing. The kinds of radiation from uranium power plants are ionizing. Ionizing radiation messes with our cells, while non-ionizing radiation is harmless.

July 6, 2015 | 04:14 PM - Posted by Sebastian Peak

But it's so bright! I'm afraid.

July 6, 2015 | 01:35 PM - Posted by -- (not verified)

860k is a hell of a buy for a gaming rig.

July 6, 2015 | 01:44 PM - Posted by David (not verified)

That's what I have in my rig as a stopgap until both Skylake and Zen show up. I have the 860K overclocked to 4.5 GHz with a 212 Evo and a bargain-basement R9 285 on an MSI A88XM-E35 V2 mobo, and it destroys my previous Sandy Bridge i3 build. Everything runs great, including The Witcher 3, apart from Arkham Knight (who's surprised, eh?)

July 6, 2015 | 02:36 PM - Posted by (not verified)

I got that motherboard too, a steal at £40.

July 6, 2015 | 03:52 PM - Posted by -- (not verified)

Yep, FM2 motherboards are a steal!! No reason to give Intel any money when this $70.00 chip runs with a $250.00 Intel.

So much value for the money, it's awesome!

July 6, 2015 | 04:54 PM - Posted by arbiter

If you base that off results using Mantle, you might want to think some more about that. It looks like the i5 4440 is overall best on average. Yeah, Mantle helps the 860K, but AMD has dropped work on it completely now, so only the few games that have it will do well, and even then there's no guarantee it won't be removed later.

July 6, 2015 | 05:20 PM - Posted by BillDStrong

Yes, but all future games will get the benefit of a Mantle like API, if they use DX12 or Vulkan. And AMD continues to support Mantle on those cards that can use it, as it would be a huge blow to those companies they brought on to support it to remove support.

July 6, 2015 | 10:27 PM - Posted by Anonymous (not verified)

Mantle is not gone; it's been ported to Vulkan, and expect AMD to keep Mantle going internally as a testing software platform for its driver development. Any new tweaks or improvements will be included in Vulkan. Lots of SteamOS-based games will be utilizing Vulkan, and I'm sure that Vulkan will be available on Windows computers also; a lot of applications utilize the Khronos Group's APIs, including a fair chunk of the mobile market.

July 7, 2015 | 02:34 AM - Posted by arbiter

AMD has said Mantle development and optimization is DEAD, so any game with it now is basically vaporware. Even with "Vulkan", that's OpenGL territory, which is mostly on Linux, where AMD's drivers are, need I say, massively slow and behind.

July 7, 2015 | 09:04 AM - Posted by obababoy

I think we need to wait until DX12 to see gaming reviews and utilization.

July 6, 2015 | 01:38 PM - Posted by Dunecatfish

Great article Sebastian. This will help me recommend some computer builds/parts for some people.

July 6, 2015 | 04:15 PM - Posted by Sebastian Peak

Awesome, thanks! Good to know some of this will be put to good use.

July 6, 2015 | 01:46 PM - Posted by Anonymous (not verified)

The i5 doing so well against the i7 is probably due to the negative effect of Hyper-Threading when a program limits the number of cores it will work with.

If the program uses only 4 cores, and the 4 it grabs are really HT logical cores, then performance will suffer.

With the i5 the program will always be grabbing 4 real cores.

I found this out with encoding on a 6-core/6-HT 3930 with a program that would only use 8 cores: it was 13% slower with HT enabled.
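The sibling-core effect described in this comment can be inspected directly on Linux, where each logical CPU publishes its HT siblings in /sys/devices/system/cpu/cpuN/topology/thread_siblings_list. The helper below is a sketch with an invented name and sample topology, not code from any specific tool:

```python
def physical_core_reps(sibling_lists):
    """Given per-CPU 'thread_siblings_list' strings (e.g. '0,4' or '0-1'),
    return one representative logical CPU per physical core (the lowest
    sibling). Pinning a 4-thread workload to these IDs avoids landing two
    threads on HT siblings of the same physical core."""
    reps = set()
    for siblings in sibling_lists:
        ids = (int(tok) for tok in siblings.replace("-", ",").split(","))
        reps.add(min(ids))
    return sorted(reps)

# Example: a 4-core/8-thread CPU where logical 0 and 4 share a core, etc.
topology = ["0,4", "1,5", "2,6", "3,7", "0,4", "1,5", "2,6", "3,7"]
print(physical_core_reps(topology))  # → [0, 1, 2, 3]
```

On an i5 with no HT, every logical CPU is its own physical core, which is why the commenter's 4-thread case never hits this problem there.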

July 6, 2015 | 05:13 PM - Posted by Anonymous (not verified)

Good point, maybe as a follow up article PCPer could do testing of HT on vs HT off in games? (I know this gets done every few years by other sites but I haven't seen one recently with the 4790k)

Great article Sebastian.

July 6, 2015 | 05:38 PM - Posted by VikingVR

The function of Intel Smart Cache is at the hardware level, where the physical cores have shared access, not the logical cores. Hyper-Threading is marketing slang for SMT, where the parallelization of threads is optimized, prioritized, and ordered through the core's pipeline.

IMHO, the performance hit you experienced was probably due to simultaneously seeding and looping the cache at the pipeline level, leading to a bottleneck in that resource. So it's not that the "logical" cores were the issue; rather, the overall cache miss rate was high due to the resource bottleneck. Also, since the i5 4440 has a smaller L3 cache (6MB), it's faster to search than the i7's 8MB cache and probably resulted in a lower cache miss rate.

July 23, 2015 | 04:17 AM - Posted by Anonymous (not verified)

HT is rarely detrimental for gaming, though it can be. The similar results here are not because HT is detrimental, but rather because the extra threads are not needed. The advantage is probably completely due to the frequency difference.

Having said that, an i3 with only two physical cores can often run better with HT enabled, provided the main game thread isn't assigned to one of the hyperthreads.

I've only found a handful of games that suffer from HT issues (Witcher #1, Spider-Man WOS, and I think some of the Ubisoft titles like Far Cry 3).

Future proofing with DX12 has to be tempered with the fact that most people will still buy older games as well.

As of the time of this writing I'm recommending the i3-4160/4170 for gaming if the graphics card budget is under $200. Once you creep higher than $200 I start to recommend an i5-4xxx.

Overclocking (and CPU cooler), motherboard etc all factors in as well.

Nice article! I would love to see a similar price/performance graph based on automatically updated pricing, though that is probably difficult to do, as rebates, product quality, and again motherboard and other costs are hard to add in as well; thus you do have to LEARN a bit to choose your optimal setup.

July 6, 2015 | 01:57 PM - Posted by JohnGR

WOW! This is a HUGE review. Or should I say project? I think project is better.

Thank you very much. So much interesting info in these charts.


July 6, 2015 | 04:17 PM - Posted by Sebastian Peak

Project is better, yes. Sad thing is I wanted to do so much more! But who wants to look at charts with 100 things on them? (Plus, who has that kind of time!)

July 6, 2015 | 05:22 PM - Posted by BillDStrong

Sounds like a great excuse to create an advanced chart that would allow us to select the info we are interested in, and fit the chart to that. Do the work once, then you can use it on all articles.

July 6, 2015 | 01:55 PM - Posted by kunglao

I am a proud owner of an i3 4130 and soon-to-be GTX 970. Man, I am confused how the heck that little i3 outperformed the Devil's Canyon several times with a high-end GPU.

Edit: Thanks for taking the time to share this, and six months of work. WOW, kudos m8

July 8, 2015 | 02:10 PM - Posted by fade2blac

Like the author and many of the readers, I too am quite surprised by the i3-4130 taking the lead on some of the tests. However, it appears to have been when the following are true:

-when paired with AMD cards
-while using Direct X as opposed to Mantle
-the test is primarily GPU-bound

This leads me to think it may be driver related. Perhaps it is a symptom of poor multi-threading in AMD's Direct X drivers. The R7 260X (Bonaire) and R9 280 (Tahiti Pro) both have only 2 shader engines/geometry processors and 2 ACEs, compared to 4 and 8 respectively in the R9 290X (Hawaii). But if the narrow front end of the R7 260X and R9 280 architectures limits the effectiveness of more threads, this still does not explain Shadow of Mordor, where even the R9 290X liked being paired with the i3-4130. Perhaps by disabling Hyper-Threading on the i7-4790K one can see if more L3 cache per thread/core has some relevance.

It is definitely interesting to think about what correlations can be made from all this data. At a minimum, it shows that for AMD, Mantle makes a very tangible difference in CPU-bound scenarios. Looking at the numbers for NVIDIA's cards, it seems GPU performance scales in a more predictable fashion relative to expected CPU performance.

P.S. I appreciate the amount of effort that goes into a project like this. Hopefully you get some extra allowance for the hit on your power bill!

July 6, 2015 | 02:32 PM - Posted by kenjo

That was interesting. If the Mantle results are transferable to DX12/Vulkan, the CPU is going to be almost irrelevant with new games. But I guess they'll find something else to load it with when the API gets more streamlined.

July 6, 2015 | 05:24 PM - Posted by BillDStrong

Yep, games that have the CPU as a bottleneck will have more room to breathe. And, we can always find more uses for the CPU. Better Physics, more units on screen, better AI and lots of fun effects.

July 6, 2015 | 02:34 PM - Posted by Goofus Maximus (not verified)

I commend you on doing all that while having a baby that was too enthusiastic about getting to see the outside world.

Time to get some sleep. :)

July 6, 2015 | 02:36 PM - Posted by Ryan Shrout

While I appreciate the sentiment, Sebastian deserves all the credit here. :)

July 6, 2015 | 04:20 PM - Posted by Sebastian Peak

Yes but with mo' credit comes, I am told, mo' problems

July 6, 2015 | 02:39 PM - Posted by JonFX01 (not verified)

Aside from Metro: Last Light, these games aren't CPU-intensive or hungry for per-core power. Sure, you can game on the Athlon 860K just fine most of the time, which is amazing when you are on a tight budget, but you will inevitably run into problems when playing multiplayer on heavily populated servers in Planetside 2, Arma 3, and other MMOs where lots of players need to be tracked in real time.

For single-player, the 860K should have you set unless you're playing something physics-heavy. Keep in mind, however, that some RTSes like having Intel's strong single-core performance too.

Also, the Bulldozer architecture suffers from CMT's inherent flaw: resource sharing across the 2 cores in a single module. Games that need strong single-core performance should be coded in such a way that they keep the second core in the module (where the primary core for a given application is) idle, so that the primary core has all the resources to itself.

July 6, 2015 | 02:56 PM - Posted by Anonymous (not verified)

For most MMOs, a lot of processing will be done on the server side.

July 6, 2015 | 05:37 PM - Posted by Anonymous (not verified)

Still, the fact remains that a strong CPU is more important in multi-player games. What you said doesn't matter.

July 6, 2015 | 11:49 PM - Posted by Anonymous (not verified)

I don't think it is that much more important; in fact it may actually be less important. With the processing going on the server side, another player looks just about like an NPC as far as the local machine is concerned. Most MMOs have very simplified, if not non-existent, collision detection. This is due to the latency involved with communicating over the internet. A game which does detailed collision detection and other game mechanics which are not easily doable over the internet will probably consume a lot more CPU cycles. Unless the game takes ridiculous amounts of bandwidth (most do not), I would expect them to not be that CPU sensitive. Many years ago, I used to play WoW over a 56k modem connection. There isn't actually that much information to send.

July 8, 2015 | 11:38 AM - Posted by ppi (not verified)

Specifically, WoW is quite CPU-dependent in 25-man raids, when all the action goes on.

It is because the CPU also has to process all those NPCs and fireballs and whatnot flying around. Plus UI mods.

July 10, 2015 | 02:28 AM - Posted by sob3kx (not verified)

I have been playing WOTLK on a C2D at 1.86 GHz for years and had no problem...

July 9, 2015 | 09:02 PM - Posted by Anonymous (not verified)

You're flat-out wrong. As someone who's an MMO expert, a powerful CPU is the single best thing for any game that has many AIs or players on screen at once. Period.

I have tested this many times with my own builds and MMOs and can say without a doubt I would trade a $400 GPU for a $200 GPU and put that money into the CPU. The pure horsepower of your CPU is the single greatest factor in games like Diablo 3, GW2 world vs. world, Planetside 2, etc. Anything with high model counts.

July 7, 2015 | 09:32 AM - Posted by jin (not verified)

Sadly, most MMOs are poorly optimized and rely only on single-thread performance.

July 8, 2015 | 11:49 PM - Posted by -- (not verified)

The 860K is no slacker.

Arma 3? Really, ARMA? You could have a Cray supercomputer and ARMA would still run junky; it's a poorly programmed game.

July 6, 2015 | 02:43 PM - Posted by Crazycanukk

I went way over the top with my build and I know right now it's overkill. I went with a 5960X myself. Many (not all) people will build a new system, even with a 4790K, when the next hot CPU hits the market in a few years, and repeat that cycle again a couple of years later. I was one of them for a long time. Even with a quad-core I wanted more performance, and the newest CPU always enticed me with its specs.

This time I went all in and built a monster rig. I built it with DirectX 12 going forward and its improved CPU utilization in mind, and with the next 6-7 years in mind.

A quad-core is all you need currently; there is no argument there at all. It's more than enough. But for myself, 8 cores with 16 total threads will be able to carry me through even longer than my quad-core builds did, and I'm finding other things like video processing and encoding get done in blazing fast time.

Last night I was playing Witcher 3, streaming via Steam to my living room TV, AND encoding 5 hours of old home movies I had just copied from VHS tapes, AND running a few other minor tasks. Not a single hitch.

And yes, I'm finding more games than ever utilizing most of my cores when gaming, especially when I unpark all the cores. Witcher 3 uses 6 cores most of the time, and BF4 does as well, just as examples. DX12 will make this even more of a reality. :)

Either way, good article Sebastian. I'm getting ready to build a system for a friend, and this article will definitely help him choose a CPU for that build.

July 6, 2015 | 04:56 PM - Posted by arbiter

There is a program someone put on Steam called "CPUCores" that can limit a process to real cores, not HT ones.

July 20, 2015 | 11:48 PM - Posted by Mike11585 (not verified)

Well, I went with an FX-8350. I know it's just not as good as a good i5 for most games, but DX12 will basically take much of the CPU bottleneck out. Most folks need to understand that if you game at 2K-4K, an 8350 will run any of these multi-core game engines no problem. Older titles will favor Intel until AMD puts out Zen (I hope it is awesome).

July 6, 2015 | 02:59 PM - Posted by Colin (not verified)

This is the best article I've read on PCPer in AGES. Seriously, outstanding work. Thank you so much for taking such a thorough approach. Bravo sir!

July 6, 2015 | 03:01 PM - Posted by Colin (not verified)

My 2500K + R9 290 x-fire is going to last me a LONG time unless I go 4k Samsung 48 inch TV... then I need HDMI 2.0.

July 10, 2015 | 02:29 AM - Posted by sob3kx (not verified)

There will be reduction to DP this summer/fall.

July 6, 2015 | 04:22 PM - Posted by Sebastian Peak

You're too kind :)  Wish I had finished it sooner. Plus there's more that can be done with this massive Excel spreadsheet I've made.

July 6, 2015 | 03:00 PM - Posted by mLocke

Your efforts are greatly appreciated.

July 6, 2015 | 04:35 PM - Posted by Sebastian Peak

Thank you!

July 6, 2015 | 03:08 PM - Posted by Luis Diaz Manzo (not verified)

If you still have the data, could you consider adding the price of the minimum capable motherboard and PSU to each CPU's price, to account for that variable in the case of the FX-8350 and FX-9590? I think a 6-phase motherboard adds at least $8 (shipped) to the price consideration of the FX-8350, and a 10-phase motherboard adds at least $35 (shipped) to the price of the FX-9590. So building with the FX-9590 ends up being ~$85 more expensive than with the FX-8350, and ~$22 more expensive to build with than the Athlon and a comparable Intel, with the exception of the Intel i7-4790K, which could (or should) also need a 6-phase motherboard.

July 6, 2015 | 04:24 PM - Posted by Sebastian Peak

That's a good idea. I didn't cover motherboards... Could be a good followup post!

July 6, 2015 | 03:23 PM - Posted by Anonymous (not verified)

The i5 technically has more cache per thread than an i7. There are also going to be other differences due to shared resources from HT. It might be interesting to disable HT and see how they compare for one of the tests showing skewed results. The i7 has 8 MB of cache vs. 6 for the i5, but the i7 shares it with 8 threads instead of 4. The i3 has less cache per thread than both of the others, though. It might be interesting to check the cache latency on the i3 vs. other Intel processors. Current processors are extremely cache-bound due to the large disparity between processor speed and memory latency; small changes in cache performance could produce large performance differences. I thought all Haswell parts were the same die, though, so it seems like they should be roughly the same. Is it possible to test with the 59xx or 58xx parts limited to dual-channel memory? I have considered getting a 5820 for the larger cache. It makes a big difference for some applications.
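The cache-per-thread arithmetic in the comment above is easy to check. A quick sketch using the L3 sizes quoted in the comment, with thread counts assumed from the Haswell parts under discussion:

```python
# Cache-per-thread comparison for the CPUs discussed above.
# L3 sizes (MB) are as quoted in the comment; thread counts assume
# the stock Haswell configurations (HT on for the i7 and i3).
configs = {
    "i7-4790K": (8.0, 8),  # 8 MB L3, 4C/8T with Hyper-Threading
    "i5-4440":  (6.0, 4),  # 6 MB L3, 4C/4T
    "i3-4130":  (3.0, 4),  # 3 MB L3, 2C/4T
}
per_thread = {name: l3 / threads for name, (l3, threads) in configs.items()}
for name, mb in sorted(per_thread.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {mb:.2f} MB L3 per thread")
```

The result matches the comment's claim: the i5 leads at 1.50 MB per thread, the i7 (with HT active) has 1.00 MB, and the i3 trails both at 0.75 MB.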

July 6, 2015 | 04:23 PM - Posted by Sebastian Peak

Interesting info. This could explain some of the i5 results... The i3 just didn't make sense - almost as if either it or the GPU were running OC. But I know for a fact the CPU wasn't, so now I blame the GPUs... I don't have these anymore or I'd re-run those tests

July 7, 2015 | 12:05 AM - Posted by Anonymous (not verified)

I might do a little research if I have some time. The i3 only has 3 MB of cache compared to 6 MB for the i5 and 8 MB for the i7. It is possible that a cache look-up is actually faster for the smaller cache.

July 6, 2015 | 03:27 PM - Posted by Anonymous (not verified)

"There will soon be a new version of Windows, a new standard graphics API..." What about some more testing once the DX12 and Vulkan (the open standard graphics API) APIs are RTM? I'll be looking for the SteamOS reviews/benchmarks alongside the Windows OS reviews for all the graphics SKUs. It's going to take at least a year of testing on the new graphics APIs just to get a baseline amount of statistics to make an educated guess. AMD's Mantle driver branding is no longer around, but the public-facing version of the Mantle API's features (Vulkan) is close to the metal, so I expect AMD will have its internal "Mantle" improvements ported over to Vulkan as quickly as possible. Khronos needs to keep a faster pace of improvements coming, and the SteamOS/Steam gaming ecosystem should help keep the Khronos Group's Vulkan in lockstep with the latest.

One big question remains about Multiadapter, and any Khronos/Vulkan equivalent that may be available. How will gaming performance be assisted when both the integrated graphics and the discrete graphics can perform graphics tasks for gaming or other workloads? What will the game makers do to make use of Multiadapter's abilities and performance improvements? There will be better multicore CPU support going forward, plus AMD's and the HSA Foundation's work to further meld the CPU with the GPU for improved workloads all around, in addition to gaming. I hope the CPU/SoC manufacturers take more time getting the all-around/all-OS support for their products' respective graphics drivers in order, and working for DX12 and Vulkan.

July 6, 2015 | 03:38 PM - Posted by Anonymous (not verified)

Thank you so much for this one!

I even got to play along and test my budget gaming rig with your results.

I compared my Intel Core i5-4570 3.2 GHz + 970 OC (on Windows 10) with your i5-4440 + 980 at reference clock.

I just beat the 980 paired with the 4440 in all results. (Hot damn!)

My 970 is at a 1512 MHz overclock with +500 on the Samsung memory (2003 MHz in GPU-Z), so that had something to do with it.

I remember reading all the 970 reviews during launch week of Maxwell, 970 vs. 980 (the 980 only had reference models at launch), and seeing max clocks of around 1500 matching 980 reference FPS results. Glad to see nothing has changed.

Having a $320 GPU beat/match a reference-clocked $500 GPU makes me smile. I think any future buyers in the $300-400 GPU range would be well served by max-OC 970 vs. 390/X reviews, as the 970 offers a hell of a lot better value vs. the 390X in my thinking once max OC is taken into account.

You hear the classic 3.5 GB vs. 8 GB 390/X argument from AMD fans. However, Windows 10 just made this a non-issue.

GPU memory is now all one big virtual block under Windows 10 WDDM 2.0 (with lower memory overhead to boot).

Now games just see one large virtual memory block.

Now games use all the memory they want (it's just one big endless block) to store everything in. (The driver/GPU controls where it's all placed with 2.0, not Windows anymore.)

For example, this is me using Ultra texture quality in Mordor with my OC'd 970, using 5272 MB of my video (virtual) memory. Who said you can only get away with Ultra on 6 GB cards or higher!

The funny part is FPS is not even really affected by using my RAM as VRAM to help out with these insane texture settings.

My rig's results matching your High quality preset:
Average FPS 131.29, min FPS 80.86

Changed just the texture quality to Ultra with the rest matching your settings:
Average FPS 125.16, min FPS 80.90

My point: a $320 970 destroys the newer (and overpriced) 390X in value, for $100 less.

Twice the VRAM for $100 more, with it only matching the performance of an OC'd 970 at best.

*Pricing based on the MSI 970 ($329.99) vs. the MSI 390X ($429.99)

July 6, 2015 | 04:29 PM - Posted by Sebastian Peak

Play along at home? I like it!  Good results, too.  I'm with you as far as overclocking a cheaper part to match the next model up.  Always had fun with that.  My favorites were my BIOS-modded Radeon HD 5870 @ 1 GHz and my GTX 560 Ti running at 900 MHz.  Good times

July 6, 2015 | 05:07 PM - Posted by Joe (not verified)

Dude, it's nice to see you are happy about your card. Coming from a fellow 970 owner: they are really good cards, but not great cards. This was a very good article from Sebastian; good work, man. But be careful with your comments, because you will turn this into a flame war real quick, since you are directing a lot at AMD, specifically the 390X. I think all cards are overpriced, but that's just me. I would like to see comments continue to be constructive on this article.

That might change with your comments about the 390X. It might not have been your intention, but people might look at it as a chance to start the crap. Or I could be wrong and maybe it was your intention. I just don't want to see a really good article ruined with all the BS of the flamers.

July 6, 2015 | 05:41 PM - Posted by BillDStrong

Your comments about the 3.5 GB of memory are wrong. That was, and is, handled by the NVIDIA driver, so the DX version has no control over it. WDDM has a little control over it, in as much as NVIDIA has to follow the driver model.

Any performance you see will be from WDDM changes and Win 10 optimizations. DX12 won't make a difference until you have games that use DX12.

If you run the tests that showed the 3.5 GB memory segmentation originally, it will still be there. Software can try to hide hardware, but you can always find it if you know where to look.

I put this in before actual fanbois start a riot.

July 6, 2015 | 07:16 PM - Posted by Anonymous (not verified)

"Any performance you see will be from WDDM changes, and Win 10 optimizations." Yes, that's what I meant by lower overhead for memory.

As for "fanbois start a riot": I just shoot at myself first, before they can.

Take the "Overclocking the R9 390X" news item that PCPer linked on the front page.

In it they compared it to a 980 at a 1516 clock, damn near my 1512-clocked 970.

Well, I know my 970 has about 18% fewer cores than the 980 (same GM204 chip), so whatever results the 980 gets compared to a max-overclocked 390X, minus 18%, would be how the 970 matches up with a max 390X.

Farcry 4 with gameworks on:
"overclocked GeForce GTX 980 is a massive 38% faster than the overclocked R9 390X. The GTX 980 is a lot better at Tessellation"

I'm sure a 970 would beat the 390X OC by around 19%.

Dying Light:
The overclocked GTX 980 is 26% faster than the overclocked MSI R9 390X.

So the 970 would only be about 8% faster (26% minus the core deficit = 8%).

The Witcher 3: Wild Hunt:
the overclocked GTX 980 is 19% faster than the overclocked MSI R9 390X at these settings.

So it'd be 1%, or dead even with the 390X OC.

And of course these next 2 games my 970 would lose to the 390X hands down.

Battlefield 4
The overclocked GTX 980 is only 7% faster than the overclocked MSI R9 390X in this game.

Well, I think my 970 would be about 11% slower than a 390X.

GTA 5:
The overclocked GeForce GTX 980 is about 10% faster than the overclocked MSI R9 390X at these settings.

Again my 970 would be about 8% slower than the fastest 390X. So I give the 390X its dues.

I sure think someone should do what HardOCP did with the 390X and 390 (non-X) and compare them to a max-OC 970.

If they did it for 1080p & 1440p, you'd see the 970 trading blows with a card that costs $100 more, and killing the 390, which is priced the same as it, hands down.

I talked about the WDDM 2.0 changes in Windows 10 to make the point that the 8 GB on the 390s is not winning AMD the hand over the 970 that everyone thinks it is.

July 7, 2015 | 01:01 AM - Posted by dpa (not verified)

The price of the entire 300 series indicates AMD is committing suicide.

July 6, 2015 | 05:08 PM - Posted by nevzim (not verified)

No frame times? Please at least sort by minimum framerate. Thanks.

July 6, 2015 | 08:08 PM - Posted by Lance Ripplinger (not verified)

I always suspected the i5 chips from Intel were a winner. Amazing that those AMD chips are still hanging in there, given how old they are now. This article confirms that I will keep recommending i5 chips to clients. I hope Ryan sends you on vacation to Tahiti or something for this!

July 7, 2015 | 01:23 AM - Posted by Sebastian Peak

Paid vacation to tropical destinations, eh? I will forward this along through the proper channels :)

July 6, 2015 | 10:15 PM - Posted by Cw (not verified)

Is it possible to look at different core speeds for the i7 and the AMD FX chips?
Say 2.0, 3.0, 4.0 GHz, and the max for each chip?

July 7, 2015 | 12:44 AM - Posted by Sebastian Peak

It's possible, sure. But I would need to outsource that kind of testing. Interested? :)

July 7, 2015 | 09:18 PM - Posted by CW (not verified)

Yes, I would. I have a G3258 I am thinking of upgrading; it does 4.2 GHz on air and more on water. The question I have is whether cores outweigh GHz. I could just replace the chip with an i5, but I'm tempted to get an X99 ITX board with a 5820K, which would cost much more. The other two options are Xeon D, or waiting for Skylake.

July 6, 2015 | 10:37 PM - Posted by Anonymous (not verified)

Sorry, but the Sniper Elite results at 2560 defy all logic. The 980 and 770 with the 9590 completely blow the 4790K away with the same cards. The 9590 being 50% faster than the 4790K with a 980 would in no way, shape, or form happen.

July 6, 2015 | 10:44 PM - Posted by Anonymous (not verified)

Hmm no edit button?

It is clear, after looking a little harder, that all you did was carry over some of the 9590 performance from 1080 to 1440. The 176.3 is the exact same frame rate for the 9590 and 980 at both resolutions. You clearly screwed up, but then acted like the 9590 was performing normally in your comments. PLEASE go back and fix your mistake and edit your comments under the 2560x1440 results.

July 7, 2015 | 12:39 AM - Posted by Sebastian Peak

Zero information was reused or "carried over". I invite you to assemble the same hardware and run each test at both resolutions, three times each, and average the results to one decimal point as I did.

I have no problem going back to the original benchmark data for each card to ensure I didn't make any errors, but please don't make careless accusations of falsification. There's just no reason for me to do that and such behavior would negate the usefulness of the entire project.

One possible explanation is that I had inconsistent results with the FX 9590. If you have one of these you'd know that a 220W TDP processor consumes a TON of power and runs incredibly hot. It is almost impossible to run one at load without experiencing throttling. I had it running on an ASRock Fatal1ty 990FX Killer motherboard with a Corsair H105 set to high fans, and a 1000W SilverStone Strider PSU powering it. If this combo wasn't enough then I guess I'm not up to running a 9590. It's not worth the money over the 8350 it's based on anyway.

I will post my findings.

July 8, 2015 | 06:08 PM - Posted by Anonymous (not verified)

Well, sorry, but you are flat-out wrong, and you are too ignorant and stubborn to see it. Anyone with an ounce of common sense knows darn well a 9590 is not 50% faster than the 4790K. You don't need a CSI team to see that all you did was carry over the lower-resolution frame rates for a couple of setups. What's hilarious is that you actually think the 9590 and 980 are getting the identical frame rate at the higher res, yet the 4790K fell behind by 50%. lol, pull your head out of your rear end and fix your numbers.

July 8, 2015 | 07:17 PM - Posted by Jeremy Hellstrom

Better watch out, Sebastian. Apparently you have a stalker who has been monitoring you 24/7 since you started this project and is more familiar with how you tested and analyzed than you are yourself.

Hope they at least chip in on the bills while breathing heavily down your neck.

July 9, 2015 | 07:51 PM - Posted by Sebastian Peak

I almost don't want to tell him that he's right and thank him for pointing out a simple mistake in a huge graph, because he's having far too much fun with this as it is!  :)

July 7, 2015 | 01:12 AM - Posted by adogg23

I think I made a good choice when I got my 4690K on sale for $170. Thanks for taking the time and effort to produce such a great presentation for future CPU decisions.

July 7, 2015 | 01:22 AM - Posted by Sebastian Peak

Thanks. I think this has also convinced me that a Core i7 isn't needed for even a high-end gaming rig. Granted there are plenty of applications that take advantage of the extra threads, but games don't seem to.

July 7, 2015 | 02:33 AM - Posted by BillDStrong

At least not yet. The move to DX12 and Vulkan will add the ability to take advantage of more cores. What you may find interesting at that time is you can have more threads at lower clocks and still max out the GPUs, at least the single GPU setups.

Luckily, VR is coming, and it will need all the power and speed you can get, at least for a few iterations.

But, it will change the whole dynamic of these tests. Cache misses will have different effects than they do now, and memory latency is going to play a bigger role.

July 7, 2015 | 09:14 AM - Posted by obababoy

I have an i7-4770, but I am more excited for AMD with the upcoming DX12! Some cheap 8-core CPUs might be awesome!

July 8, 2015 | 04:14 AM - Posted by Ben (not verified)

Try The Witcher 3... I think you'll find it will take every core you throw at it.

July 7, 2015 | 01:12 AM - Posted by Anonymous (not verified)

I stopped reading when I got to the games list. You have got to be kidding me; not a SINGLE ONE of those games is representative of current-generation games and game consoles. Congratulations on the many hours all this useless testing must have taken. If you wanted real benchmarks you should have used games like GTA 5, Far Cry 4, Witcher 3, and maybe AC: Unity or Watch Dogs.

July 7, 2015 | 01:20 AM - Posted by Sebastian Peak

This was less about being current, and more about how these products might differ for the games and hardware that people have. I agree that the latest games are always preferred, but there are two problems.

First, I had to choose games with an automated benchmark tool, a necessity given the limitations of the testing - and for accuracy in testing this much hardware. There are always variances in manual run-throughs, and while slight I didn't want a single frame to be influenced by this.

The second problem is timing: all games were chosen in January when I began. Of the games you mentioned, GTA 5 was not out for the PC until April 14, and The Witcher 3 was released May 19.

July 7, 2015 | 03:20 AM - Posted by Joe (not verified)

Sebastian, I wanted to say thanks for your hard work on this project. I know it was a serious undertaking. The article was a good read also. Unfortunately you will not be able to satisfy everyone with everything. You explained the reason why you chose the games that you did, and people still complain.

Kudos to you for being professional when responding to the people who are not happy or say you made some kind of mistake.
I look forward to any follow up info regarding this article in the near future.

July 7, 2015 | 01:34 AM - Posted by adogg23

You do realize this kind of project takes time, don't you? You can't test games that haven't been released yet. The article explains why these games were chosen.

July 7, 2015 | 03:13 AM - Posted by J Nev (not verified)

While this article probably doesn't warrant that outburst, most of the video card reviews certainly do. PC Per is notorious for not having the latest games in its benchmarking results.

Give it 3 years and they *may* add Witcher 3. They did add GTA5 though which is respectable.

July 7, 2015 | 06:45 AM - Posted by Relayer (not verified)

Couldn't you find a more expensive 290X? I don't even think the 8GB models cost that much. And you're using six-month-old drivers. lol

July 7, 2015 | 08:19 AM - Posted by pastuch

To me, the clearest lesson to be learned from all of Sebastian's exhaustive research is this:

Mantle is amazing, and we needed DX12 YEARS ago. The difference in minimum frame rates on ALL CPUs between Mantle and non-Mantle is staggering when looking at Civ: BE and Sniper Elite. The minimum frame rates of most games are pathetic on DX11 for both AMD and NVIDIA. It blows my mind that games with 170 fps average frame rates can dip down to 11 fps at times. I hate DX11.

July 7, 2015 | 08:19 AM - Posted by pastuch

double post

July 7, 2015 | 08:45 AM - Posted by Shane (not verified)

I wonder if the discrepancies between the i7 and i5 can be attributed to poor cooling. Both are capable of dynamic processor speed control and can 'overclock' themselves when needing more power, but they can also run a little slower when experiencing too much heat. The i7 generates much more heat, and is thus less likely to be 'overclocking' itself, or possibly not even running at full speed at all times.

'Tis a well-documented problem.

July 7, 2015 | 08:53 AM - Posted by brambyon

This article is what I needed: a week ago I was puzzling over whether to upgrade my 720p TN monitor to a 1080p IPS, or my dual-core i3 to a quad-core i5. Since I only have a 650 Ti, the monitor upgrade is the better option.
Great article, thanks a lot.

July 7, 2015 | 12:54 PM - Posted by Sebastian Peak

Yep, with a 650 Ti I don't think the Core i3 would be a bottleneck at all. And thanks! I appreciate it.

July 7, 2015 | 09:23 AM - Posted by obababoy


Now you have to do this ALL over again with DX12! We all seriously appreciate the work!

I also want to note that you mention the 290X price is creeping up, but there are still a few in the low-$300 range, not to mention the R9 290, which is only about 6% slower and can be had for around $260 right now.

July 7, 2015 | 09:51 AM - Posted by Sebastian Peak

Thanks, and you're right - all of this would need to be updated for DX12, the latest drivers, and new hardware. Not to mention with games that actually use DX12 or are patched to do so, which will take time.

I based my comments on price creep on Amazon, where suddenly the cheapest R9 290X went up to $389. I know there are some deals out there but I don't think it helps that the 390X is priced over $400.

July 7, 2015 | 02:55 PM - Posted by brisa117

Bravo sir! I love reading these HUGE (applause again) articles comparing different components. It's one thing to read about an individual review or to compare specs across components, but these articles really put things in perspective (no pun intended). Keep up the good work!

P.S. You have great product photography skills (in your other reviews).

July 9, 2015 | 10:36 PM - Posted by Sebastian Peak

High praise! Thank you :) Feedback like this makes all of the work involved worth it.

July 7, 2015 | 04:34 PM - Posted by Anonymous (not verified)

Out of curiosity, how do you think an older yet still capable platform like an LGA 1366 i7-920 would have done?

July 9, 2015 | 08:14 PM - Posted by Sebastian Peak

This is a great question. You've inspired another story if I have time and resources for it!

July 20, 2015 | 10:55 AM - Posted by pastuch

You know what would be a cool investigation:

The most popular CPUs paired with modern GPUs. A lot of us are still running 920s and 2500Ks/2600Ks.

My roommate has CrossFired 290s with an i7-920 and it works well. I'd love to see some numbers in a DirectX 12/Windows 10 environment.

July 7, 2015 | 11:56 PM - Posted by Zabojnik

This is brilliant stuff. Thanks, Sebastian.

July 9, 2015 | 08:06 PM - Posted by Sebastian Peak

You're welcome!

July 8, 2015 | 10:04 PM - Posted by Thomas GX (not verified)

A rather super-extensive review you have given us, Sebastian. I feel confident in a 750 Ti card giving me all the performance I need (I'm not a super gamer). Thanks.

July 9, 2015 | 08:14 PM - Posted by Sebastian Peak

Thank you! Yes, the 750 Ti is a great little card and can consistently be found for less than $149 just about everywhere. And it doesn't require external power! A very good all-around 1080p card.

July 9, 2015 | 02:00 AM - Posted by FuegoVision (not verified)

Thank you. Great analysis. I think many will find this a useful performance metric for CPU/GPU configurations. The game selection is rather sparse for my taste, but I know more games would have made this project more complex, and that is not the point. Again, well done.

July 9, 2015 | 08:10 PM - Posted by Sebastian Peak

I appreciate it. I agree that there could always be a more representative mix of games. Even six was really pushing my limits with this, but I could certainly test more games with a smaller selection of hardware down the road. It would be a sticking point to limit the hardware selection to one price point, however... The middle ground would be the mid-range, so maybe a Core i5 vs. the FX 8350, with a Radeon R9 380 and a GeForce GTX 960?

July 9, 2015 | 02:43 PM - Posted by spellstrike (not verified)

I would have loved to see the Q6600 added to this lineup, as it was the first popular mainstream quad-core processor, to see how far we have come.

July 9, 2015 | 08:12 PM - Posted by Sebastian Peak

I think the next project might just have to be about whether or not it's necessary to replace an "aging" processor. There are quite a few Q6600 and i7 920 builds still out there, and maybe they're just fine for a new mid-range GPU?

July 10, 2015 | 09:26 AM - Posted by Anonymous (not verified)

What about MIN frame rates? I really hate the occasional frame drop when immersed in a game. My experience is that the cheaper AMD CPUs can maintain a high average fps but have the odd dip (9-12 fps) that drives me nuts and takes me out of the gaming experience. I'm currently using an i7-3770K and never going back to the bargain CPU.

July 10, 2015 | 10:44 AM - Posted by Sebastian Peak

All benchmarks include the average frame rate and minimum frame rate.

July 10, 2015 | 10:52 AM - Posted by Anonymous (not verified)

Thank you so much! I'm really happy that you included the GTX 750 GPU results in your project. I am one of many who are really grateful and will use these graphs as the basis for a lot of future purchasing decisions, on both the GPU side and the CPU side (and obviously it branches out to other components and considerations from there).

This definitely will help me make up my mind on purchasing a GTX 960 or GTX 750. Very much appreciated, Sebastian; you're a great person!

July 12, 2015 | 02:09 PM - Posted by Ultragamerblast (not verified)

I wish you had GTA5 benchmarks. But good work Sebastian !!!

July 13, 2015 | 12:07 AM - Posted by Sebastian Peak

It came out too late unfortunately. Next time!

July 13, 2015 | 02:34 PM - Posted by Keven Harvey (not verified)

I did a bit of crunching with your numbers, using your averages at 1080p and DX12 across all 6 games.
i3 + 290X -> 100.23 fps
8350 + 290X -> 95.65 fps
i3 + 980 -> 105.35 fps
8350 + 980 -> 113.92 fps

Those are averages of the average fps for the 6 games. We can see that with the 290X the i3 is the better processor, while with the 980 the 8350 is better. It seems like single-threaded performance is more important with an AMD GPU, and that it isn't optimal to pair up an AMD CPU and GPU.
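The number crunching described here is just a mean of the per-game average fps for each CPU+GPU pairing. A minimal sketch, where the six fps values are illustrative placeholders rather than the article's actual benchmark numbers:

```python
# Mean of per-game average fps for one CPU+GPU pairing, as the comment
# above describes. The six per-game values are made-up placeholders.
from statistics import mean

fps_per_game = [92.1, 110.4, 98.7, 105.0, 88.2, 106.9]  # six games

overall = round(mean(fps_per_game), 2)
print(overall)  # prints 100.22, a single "average of averages" figure
```

Note this weights every game equally regardless of its frame-rate range, which is why a single fast game can skew the overall figure.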

July 13, 2015 | 03:03 PM - Posted by Keven Harvey (not verified)

I meant DX instead of DX12, obviously; I've been talking about DX12 too much lately.

July 13, 2015 | 03:24 PM - Posted by Jerry Neutron (not verified)

Thank you for this.

July 13, 2015 | 03:44 PM - Posted by MAXXHEW

Q: How Much CPU Do You Really Need?

A: For gaming/streaming I feel like my FX-8350 is just about as much as I really need. I game on a single monitor at 1080p with a SAPPHIRE R9 285 Dual-X OC. If I feel like I need to game at higher resolutions, or add more monitors, I have a long list of GPU upgrades I could take advantage of.

All that said... my 11-year-old son uses an FX-8320E with an ASUS R7 260X DirectCU II OC on a single monitor at 1080p. He is very happy with his budget gaming PC because he knows/cares little about his hardware. When I game against him I get rekt every time. You know why?


July 13, 2015 | 03:45 PM - Posted by wizpig64 (not verified)

Thank you, Sebastian!

It would be interesting to see these data points plotted against the price of the setup, including motherboard and power supply of course.

July 14, 2015 | 12:44 PM - Posted by Sebastian Peak

Yes it would. There are a lot of things that can still be done with all of this data :)

July 13, 2015 | 07:48 PM - Posted by Mav'Erik

This is the kind of content I love, GREAT JOB! THANKS!!
In my personal "real world" situation I've been unable, until now, to decide if my OLD i7-860 at 2.8 GHz (3.46 GHz when it turbos up) is bottlenecking a recently purchased [and reasonably priced] GTX 770 with a 1080p 144Hz monitor. I finally have an answer (I think)!
If I'm extrapolating from your data correctly, I need not go beyond a Z97 motherboard with a 4440 (or better) CPU and last-gen DDR3 memory to get significant increases in FPS, due mainly to better CPU speed. And I'll then have a new machine that should upgrade acceptably as prices on better GPUs come down to more affordable levels!
Or I might just wait for Skylake and overpurchase... *_*

July 13, 2015 | 08:21 PM - Posted by db87

I hate to ruin the party, but this review is already outdated by the drivers. AMD made huge gains in overhead compared to last year's Catalyst Omega. Yes, a huge difference!

Even the latest beta (1023.7) compared to Catalyst 15.7 is +40% in the 3DMark API overhead feature test, and these drivers are just a few weeks apart...

July 13, 2015 | 08:37 PM - Posted by db87

Some very important games are left out. The latest games are very demanding; you need to look at the worst-case scenario, games that are challenging: GTA 5, The Witcher 3, Far Cry 4, Flight Simulator X (mainly single-threaded, still madness today).

I never would have chosen a game like BioShock Infinite; since it's an Unreal Engine 3 game, you know that probably even a Core 2 Duo would do well...

July 14, 2015 | 03:25 AM - Posted by Ray (not verified)

What were the clock speeds of the CPUs?

July 14, 2015 | 12:40 PM - Posted by Sebastian Peak

Everything was run at stock speeds.

For AMD the X4 860K runs at 3.7 GHz with a 4 GHz boost, the FX 8350 has a 4 GHz base and 4.2 GHz boost, and the FX 9590 has a 4.7 GHz base and 5 GHz boost.

On the Intel side the Core i3-4130 runs at 3.4 GHz, the i5-4440 has a base of 3.1 GHz with 3.3 GHz turbo, and the i7-4790K has a 4 GHz base and 4.4 GHz turbo.

July 14, 2015 | 03:42 AM - Posted by Anonymous (not verified)

Great work.

This will be a reference for months to come. Very comprehensive and informative.

AMD needs to work on IPC. I hope Zen has good memory controllers paired with the promised 40% increase in IPC, even if it's too little, too late.

These results convinced me to make the jump from FX-6300 to 4690k and I am so happy I did.

July 14, 2015 | 05:01 AM - Posted by Anonymous (not verified)

Why bother using the Core i3-4130 when the 200 MHz faster Core i3-4160 costs exactly the same? Check Newegg prices.

July 14, 2015 | 12:41 PM - Posted by Sebastian Peak

You're right, the Core i3 prices are pretty flat so it makes sense to buy the faster model, obviously. I used the i3-4130 because I already owned one.

July 14, 2015 | 12:06 PM - Posted by John Crouse (not verified)

I wish just one site would put Prepar3D or FSX in their benchmark testing so people in the simulation community had a way to gauge the hardware they need, as it is completely different from every other gaming workload out there. Think about it, please.

July 15, 2015 | 12:01 PM - Posted by Dom (not verified)

Will you be adding price/performance charts? If not, do you have the benchmark data in Excel so I can put the charts together myself?

July 16, 2015 | 08:30 AM - Posted by DiceAir (not verified)

I'd say with DX12 your CPU will become even more of a factor.

July 16, 2015 | 10:36 AM - Posted by Aysell

As usual, it was mentioned that the benchmarks were done with all other processes closed.

But a lot of the time we have a crap ton of stuff open in the background: a browser (with God knows how many tabs), a music player, voice chat, a game platform (Steam, Origin, etc.), a download or two, and so on.

Also, there are no benchmarks for something like an MMO/RTS. Kind of hard to pull off, but those are the games the processor will cry in.

Awesome project; it should give more people an understanding of balancing a system, but I'd love some comparisons with a real-life situation.

July 17, 2015 | 03:05 AM - Posted by flanker (not verified)

Awesome test guys, very interesting results sometimes :).

July 18, 2015 | 03:59 PM - Posted by cashmere_7

This review has saved me a lot of money. I was going to switch systems to Intel and retire the 8350 that I pair with my Zotac GTX 980 AMP! Edition. The i5 was kicking its ass, but I plan to game only at 1440 from now on.

The 980 is more than up to the job, and as this review shows, when you crank the resolution up to 1440 the FX 8350 battles back and gives me no reason to go Intel.

July 25, 2015 | 10:38 AM - Posted by Anonymous (not verified)

This is so good it should be permanently linked to the website homepage.

Well done sir!

August 8, 2015 | 03:17 PM - Posted by TehCaucasianAsian (not verified)

These results are amazing, and also surprising to me. Going up in resolution, in my mind I figured more CPU horsepower would be a necessity. But it seems most modern GPUs are still the bottleneck here, making stronger CPUs less of a necessity at higher than 1080P. It's very interesting data, and I thank you for taking so much of your time to collect all of it. Keep it up :)

November 26, 2015 | 01:06 PM - Posted by Anonymous (not verified)

I am still on a C2Q 9550, LOL.
I understand that even a midrange/low-end card will give better results if I move to a modern CPU.
Working on it.

November 29, 2015 | 05:18 AM - Posted by Anonymous (not verified)

Good read! Thank you for the effort put into this :)

November 30, 2015 | 12:54 PM - Posted by blue2kid3 (not verified)

OMG, that i3 is the best value per $$ on a budget build. If you want to upgrade in the future, you can get an i3 for like $80 and move up to the i5 K version later if you want.

Thanks for the hard work, guys. A perfect first post to read from you; very helpful.

December 21, 2015 | 06:36 PM - Posted by Hristin Marinov (not verified)

Hey there, I really appreciate this big review, but I'm still confused about pairing certain GPUs and CPUs together. So if it's not much of a bother, can anyone please give me an opinion?
Q: I'm building a 900p budget gaming rig. So far I have the R9 270 (I could upgrade later this summer to a GTX 960/R9 380) and I'm having a tough time choosing a CPU. The main problem is I don't want to spend too much money (it's too expensive here). So the question is: does it HAVE to be an Intel i5? Would I be able to play new games for the next 2-3 years (high settings not obligatory) with just an i3? (BTW, I also have an open mind toward AMD CPUs.) I really need an opinion (or three), please!

January 17, 2016 | 10:41 AM - Posted by Wilfredo Fornes (not verified)

Hello, very nice review. I was wondering if the gap between the FX 8350 and 4790K is smaller in 4K gaming. I have a 4790K with 980 SLI, and I am thinking of selling my CPU and motherboard and getting an FX 8350 and an ASUS Formula-Z. Will I notice a decrease in fps at 4K? The idea is to use the extra money to upgrade my 980 SLI to 980 Ti SLI. Thanks.

February 1, 2016 | 09:58 AM - Posted by sprsk (not verified)

OMG... excellent review!!!
I really hope you do the same for 2016 with DirectX 12.

March 31, 2016 | 08:05 AM - Posted by Anonymous (not verified)

I want to ask a question: can any processor be used in a laptop?

March 31, 2016 | 08:08 AM - Posted by Anonymous (not verified)

Please, I want to know which processor I can use to replace the processor in my laptop.

March 31, 2016 | 08:09 AM - Posted by Anonymous (not verified)

Description: Socket: FT3
Clockspeed: 1.4 GHz
No of Cores: 2
Max TDP: 15 W

Other names: AMD E1-2500 APU with Radeon(TM) HD Graphics
CPU First Seen on Charts: Q2 2013

April 3, 2016 | 04:40 PM - Posted by Marta Heideman (not verified)

Very nice post. I want to share what I bought a month ago; this setup is really working great. I regularly play GTA V, MechWarrior Online, Dota 2, Left 4 Dead, Aftermath, The Division, and Second Life too :D
And at stock, everything I play runs at max settings in full HD.

CPU: AMD FX 6300
HD 0: SAMSUNG EVO 850 250 GB
SCREEN: ACER V226HQL (22")

~€900

