
AMD A8-7600 Kaveri APU and R7 250 Dual Graphics Testing - Pacing is Fixed!

Author: Ryan Shrout
Manufacturer: AMD

Frame Rating Info

Testing Configuration

Here is the pricing breakdown of our entire system build.

AMD Kaveri Dual Graphics Test Setup
Processor: AMD A8-7600 Kaveri APU - $139
Heatsink: Noctua NH-L9A - $44
Motherboard: ASRock FM2A88X-ITX+ - $145
Memory: AMD Memory 8GB DDR3-2400 - $121
Graphics Card: MSI R7 250 OC Edition 2GB DDR3 - $94
Storage: Samsung 840 EVO 250GB - $169
Total Price: $712


The A8-7600 isn't really recognized by GPU-Z quite yet...


MSI Radeon R7 250 OC Edition

What you should be watching for

  1. A8-7600 vs R7 250 - How does the integrated graphics performance of the A8-7600 APU compare to the discrete performance of the R7 250?
     
  2. R7 250 vs A8-7600 + R7 250 - How much better is performance when we enable Dual Graphics with both the APU and discrete GPU at work?
     
  3. Frame Pacing enabled vs Frame Pacing disabled - How much has AMD really improved the frame pacing support with the Catalyst 13.35 beta driver?

 

Frame Rating: Our Testing Process

If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online.  Rather than using FRAPS to measure frame rates or frame times, we are using a secondary PC to capture the output from the tested graphics card directly and then use post processing on the resulting video to determine frame rates, frame times, frame variance and much more.

This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options.  So please, read up on the full discussion about our Frame Rating methods before moving forward!!

While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we are generating with additional code of our own.

If you don't need the example graphs and explanations below, you can jump straight to the benchmark results now!!

 

The PCPER FRAPS File

While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers.  The first file shows a subset of the data from the RUN file above: the average frame rate over time as defined by FRAPS, though we are combining all of the GPUs we are comparing into a single graph.  This will basically emulate the data we have been showing you for the past several years.

 

The PCPER Observed FPS File

This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown previously as the blue bars in the RUN file above.  This takes out the dropped and runt frames, giving you the performance metrics that actually matter – how many frames are being shown to the gamer to improve the animation sequences.

As you’ll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS indicates cases where the gamer is likely not getting the full benefit of their hardware investment.
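To make the distinction concrete, here is a minimal Python sketch of how the two per-second numbers could be derived from per-frame capture data. This is not the actual FCAT/PCPer tooling; the frame list and its status labels are hypothetical stand-ins for the classification described in the RUN file section below.

from collections import defaultdict

# Hypothetical per-frame records from the captured video:
# (timestamp_ms, status) where status is "full", "runt" or "dropped".
frames = [(16.7 * i, "full" if i % 7 else "runt") for i in range(600)]

def fps_per_second(frames, counted):
    """Count how many frames with a status in `counted` land in each
    one-second bucket of the benchmark run."""
    buckets = defaultdict(int)
    for timestamp_ms, status in frames:
        if status in counted:
            buckets[int(timestamp_ms // 1000)] += 1
    return buckets

# FRAPS-style FPS counts everything the game submitted...
fraps_fps = fps_per_second(frames, {"full", "runt", "dropped"})
# ...while observed FPS only counts frames the gamer actually saw.
observed_fps = fps_per_second(frames, {"full"})

for second in sorted(fraps_fps):
    print(f"second {second}: FRAPS {fraps_fps[second]} fps, "
          f"observed {observed_fps[second]} fps")

When the two numbers track each other the hardware is delivering what FRAPS claims; when observed FPS falls well below, a lot of the "rendered" frames never meaningfully reached the screen.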

 

The PLOT File

The primary file that is generated from the extracted data is a plot of calculated frame times, including runts.  The numbers here represent the amount of time that frames appear on the screen for the user.  A “thinner” line across the time span represents frame times that are consistent and thus should produce the smoothest animation for the gamer.  A “wider” line, or one with a lot of peaks and valleys, indicates a lot more variance and is likely caused by a lot of runts being displayed.

 

The RUN File

While the two graphs above show combined results for a set of cards being compared, the RUN file will show you the results from a single card for that particular run.  It is in this graph that you can see interesting data about runts, drops, average frame rate and the actual frame rate of your gaming experience.

For tests that show no runts or drops, the data is pretty clean.  This is the familiar frames-per-second-over-time graph that has become the standard for performance evaluation of graphics cards.

A test that does have runts and drops will look much different.  The black bar labeled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation – as FRAPS FPS measurement does.  Any area in red is a dropped frame – the wider the amount of red you see, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.

The wide yellow area is the representation of runts, the thin bands of color in our captured video that we have determined do not add to the animation of the image on the screen.  The larger the area of yellow, the more often those runts are appearing.

Finally, the blue line is the measured FPS over each second after removing the runts and drops.  We are going to be calling this metric the “observed frame rate” as it measures the actual speed of the animation that the gamer experiences.
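As a rough sketch of how those three categories could be assigned, the snippet below labels each submitted frame from the number of scanlines its overlay color occupies in the captured output. The 20-scanline runt threshold and the sample scanline counts are assumptions for illustration, not the exact values our extractor uses.

RUNT_THRESHOLD = 20  # scanlines; an assumed cutoff for illustration only

def classify_frames(scanline_counts, threshold=RUNT_THRESHOLD):
    """Label each submitted frame by how much of the captured output its
    overlay color band actually occupied (0 means it never appeared)."""
    labels = []
    for scanlines in scanline_counts:
        if scanlines == 0:
            labels.append("dropped")  # red in the RUN graph: never shown to the gamer
        elif scanlines < threshold:
            labels.append("runt")     # yellow: shown, but too thin to help the animation
        else:
            labels.append("full")     # counted toward the blue observed FPS line
    return labels

# Hypothetical scanline counts for ten consecutive frames:
print(classify_frames([540, 12, 0, 530, 525, 8, 0, 515, 540, 510]))
# ['full', 'runt', 'dropped', 'full', 'full', 'runt', 'dropped', 'full', 'full', 'full']

Once every frame carries one of these labels, the FRAPS, dropped, runt and observed series in the RUN graph are just different tallies of the same list.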

 

The PERcentile File

Scott introduced the idea of frame time percentiles months ago, but now that we have some different data using direct capture as opposed to FRAPS, the results might be even more telling.  In this case, FCAT is showing percentiles not by frame time but instead by instantaneous FPS.  This will tell you the minimum frame rate that will appear on the screen at any given percent of time during our benchmark run.  The 50th percentile should be very close to the average total frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate will be affected.

The closer this line is to being perfectly flat, the better, as that would mean we are running at a constant frame rate the entire time.  A steep decline on the right-hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
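A minimal sketch of that percentile calculation, assuming we already have the list of captured frame times in milliseconds (the sample data below is made up): convert each frame time to an instantaneous FPS, sort from fastest to slowest, and read off the frame rate met or exceeded at each percentile.

def fps_percentiles(frame_times_ms, percentiles=(50, 75, 90, 95, 99)):
    """Report the instantaneous frame rate met or exceeded for each
    requested percentage of the run."""
    fps_values = sorted((1000.0 / ft for ft in frame_times_ms), reverse=True)
    curve = {}
    for p in percentiles:
        index = min(len(fps_values) - 1, int(len(fps_values) * p / 100))
        curve[p] = round(fps_values[index], 1)
    return curve

# Hypothetical run: mostly ~16.7 ms frames with a handful of 40 ms spikes.
frame_times_ms = [16.7] * 95 + [40.0] * 5
print(fps_percentiles(frame_times_ms))
# {50: 59.9, 75: 59.9, 90: 59.9, 95: 25.0, 99: 25.0} - the tail exposes the spikes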

 

The PCPER Frame Time Variance File

Of all the data we are presenting, this is probably the one that needs the most discussion.  In an attempt to create a new metric for gaming and graphics performance, I wanted to try to find a way to define stutter based on the data sets we had collected.  As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display. This variance can be introduced in t_game, t_display, or on both levels.  Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?

We define a single frame variance as the difference between the current frame time and the previous frame time – how consistent the two frames presented to the gamer are.  However, as I found in my testing, plotting the value of this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT.  To be more specific, stutter is only perceived when there is a break from the previous animation frame rates.

Our current running theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames.  Then, by sorting these variance values and plotting them in percentile form, we can get an interesting look at potential stutter.  Comparing the frame times to a running average, rather than just to the previous frame, should prevent potential problems from legitimate performance peaks or valleys found when moving from a highly compute intensive scene to a lower one.
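Here is a small Python sketch of that running theory under the stated assumptions (a 20-frame running average, deviations measured in milliseconds). The frame time data is made up and this is only an illustration of the idea, not our actual analysis script.

def variance_percentile_curve(frame_times_ms, window=20):
    """For each frame, measure how far its frame time departs from the running
    average of the previous `window` frames, then sort those deviations so they
    can be read as a percentile curve (the ISU plot)."""
    deviations = []
    for i in range(window, len(frame_times_ms)):
        running_avg = sum(frame_times_ms[i - window:i]) / window
        deviations.append(abs(frame_times_ms[i] - running_avg))
    return sorted(deviations)

# Hypothetical run: steady ~16.7 ms frames with one 50 ms hitch in the middle.
frame_times = [16.7] * 120
frame_times[60] = 50.0
curve = variance_percentile_curve(frame_times)
print(f"95th percentile variance: {curve[int(len(curve) * 0.95)]:.1f} ms")
print(f"worst-case variance:      {curve[-1]:.1f} ms")

A hitch like the single 50 ms frame above lands at the very top of the sorted curve, which is exactly the sharp right-hand rise the percentile graph is meant to expose.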

While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our game play testing, and by comparing the above graphic to other data generated through our Frame Rating system, to be reasonably confident in our assertions.  So much so, in fact, that I am going to call this data the PCPER ISU; beer fans will appreciate that the acronym stands for International Stutter Units.

To compare these results you want to see a line that is as close to the 0ms mark as possible, indicating very little frame rate variance when compared to a running average of previous frames.  There will be some inevitable incline as we reach the 90+ percentile, but that is expected with any game play sequence that varies from scene to scene.  What we do not want to see is a sharper upward slope, which would indicate higher frame variance (ISU) and could be an indication that the game sees microstuttering and hitching problems.

January 29, 2014 | 07:51 AM - Posted by Nightowl (not verified)

Sorry to say Ryan, but this fix came too little, too late. For me to stay with AMD/ATI, this Frame Pacing fix should have been in Catalyst 13.12. I'm still having the infamous frame pacing trouble with Skyrim, and odd flickering in my games with a pair of 2GB Radeon 7850's, and this includes Diablo 3, which is widely known on the Diablo 3 forums to bring down many a high end system. It has reached the point where I'm jumping ship and getting a pair of 2GB GeForce GTX 760's, and I'm picking up a cheap GT 610 at the same time for PhysX processing.

AMD also needs to get back in the game of putting out WHQL drivers on a monthly or bimonthly basis, none of this quarterly garbage that they are doing; all putting out drivers quarterly does is hurt them in the long run.

January 29, 2014 | 08:53 AM - Posted by yasamoka (not verified)

The GT 610 will bottleneck 760s in SLI if used as a PhysX card. It's weak for PhysX processing.

January 29, 2014 | 09:23 AM - Posted by StewartGraham (not verified)

Let your processor handle PhysX, otherwise you're just unnecessarily dividing your PCI-E lanes, unless you have a -E SKU with 40 available lanes.

January 29, 2014 | 03:26 PM - Posted by Nightowl (not verified)

Nope, my ASUS Z87 Deluxe motherboard has a PCI Express bridge chip on it that provides additional PCI Express lanes to the PCI Express x1 slots and the last PCI Express x16 slot. This is why I can get away with adding in a card such as a GT 610 for PhysX processing. The PCI Express bridge chip is one of the reasons I chose the ASUS Z87 Deluxe motherboard when I did my upgrade in August of 2013.

Most Z87 motherboards do not have a PCI Express bridge chip on them to provide those additional PCI Express lanes/bandwidth. I've personally looked at the various Z87 chipset boards, and most of them indeed do not have a PCI Express bridge chip to provide extra PCI Express lanes/bandwidth; otherwise, the only way to get more PCI Express bandwidth is to go with an E-series chip or an X79 board and CPU.

January 30, 2014 | 04:59 AM - Posted by jackalopeater (not verified)

I can PROMISE you that you are wasting money on a GT 610 as a PPU. IF you must buy one, get at least a GT 640. But you're not going to see any benefit; I've run the tests multiple times, and very few times does it do you any good.

February 5, 2014 | 10:03 AM - Posted by Wendigo (not verified)

In fact a GT 610 is a PhysX "decelerator".

I tested various configs with a second NVIDIA card for PhysX, and for a graphics card like a single GTX 670 you need at least a GT 640 GDDR5. For two cards (and more raw processing) you need a GTX 650 Ti or better.

As an example, with a GTX 560 Ti for 3D, if you have a second card for PhysX like an 8800 GTS/GT, you get worse framerates than with the GTX 560 Ti alone handling both 3D and PhysX.

January 29, 2014 | 11:35 AM - Posted by JohnGR (not verified)

Jumping ship just before seeing what Mantle can do is something you will probably regret later. Also, as others said, if you buy two 760's you don't need a separate card for PhysX. If you do buy a third card, think of a GT 640 as a minimum.

January 29, 2014 | 08:07 AM - Posted by 7stars

Sorry to say "Nightowl", but the truth is that Skyrim is OLD. Directx 9 is OLD, and Crossfire is not really required for a DX9 title...so you could play that with one card only. You should look forward, not backward ;-)
and about WHQL, no one should feel this need...'cause the process to certify the drivers is too long and neither AMD nor we can waste so much time... beta drivers are often stable and are more than enough

January 29, 2014 | 08:08 AM - Posted by Jack P (not verified)

It's sad to see that these APUs still can't run BF3 decently.

I was very disappointed when my A8-3870K wouldn't run BF3 at anything more than like 1024x768 on low.

January 29, 2014 | 08:29 AM - Posted by Asterox (not verified)

Sorry, but you have no idea how to adjust the settings/resolution in some games depending on your graphics card's raw performance ;)

- AMD Llano APU A4-3400 in Battlefield 3

http://youtu.be/qO1wg2jgGos

January 29, 2014 | 08:09 AM - Posted by Russ (not verified)

Well, you've got my hopes up on Crossfire with Eyefinity. My second 7970 has been disconnected for months now; I would like to be able to make use of it as advertised rather than as dead weight or a space heater.

January 29, 2014 | 08:27 AM - Posted by Nilbog

I'm sorry, but Frame Pacing is not fixed. Shame on AMD for trying to weasel out of supporting DX9. The age of DX9 doesn't matter. There are still plenty of relevant and fun games that exist today using DX9 only.

The complete bullshit claim that Crossfire isn't needed for a DX9 title is just that.

Is it so crazy that people's favorite games are still running DX9? Is it so crazy that people want to use more than one GPU with a DX9 title? Maybe people don't want part of their investment being useless, or hindering their experience, while playing their favorite game.

So if your favorite game is still running DX9 and you plan on buying another GPU someday, AMD just kicked you over to NVIDIA, and they are happy to accommodate you.

My favorite game, which I play every day, is using DX9, and they have not hinted at updating the API anytime in the near future. What am I supposed to do about that?

January 30, 2014 | 02:48 AM - Posted by Anonymous (not verified)

There is a sneaky way to make things work in DirectX 9 and an APU with dual graphics...

I've been messing about with various settings and have discovered how to get the most out of your dual graphics with DirectX 9 (and earlier) games.
First you must create a profile in Catalyst Control Centre for the game's executable file, not the launcher; it has to be the actual game (e.g. not the Fallout launcher but Fallout.exe itself).
In the profile you need to change the "AMD Radeon Dual Graphics" option (at the bottom) for the executable to "AFR friendly", as this will force your computer to use the APU and GPU to render alternate frames and therefore make use of the dual-graphics system.

NOTE: YOU MUST ALSO ENABLE OR FORCE V-SYNC OR YOU WILL SUFFER THE MOST EXTREME SCREEN TEARING YOU HAVE EVER SEEN!

NOTE FOR WINDOWS 8.1 USERS: Most games won't work with v-sync whether you force it via Catalyst Control Centre or enable it in-game. This is purely to do with the lack of updated drivers for Windows 8.1. To get around this, simply run the game executable (not the launcher) in Windows 8 (or 7) compatibility mode and you will find v-sync works fine.

Obviously I haven't tried this method with every single game ever released, but every game I have tried it with (at least 25 so far) has shown a noticeable improvement in performance (e.g. usually frames per second, or the ability to put anti-aliasing up another notch without the mouse getting floaty).

Hmm, it always makes me wonder why they don't just enable it without making me jump through hoops...probably some deal they have with MS in order to push people over to new operating systems.

January 29, 2014 | 09:06 AM - Posted by Patrick3D (not verified)

I've been enjoying games with an A10-7850 paired with an R7 250 even without the 13.35 driver, but I'm looking forward to it releasing this week. Dual Graphics works with the 13.30 driver, and the benchmarks I've run thus far show significant performance improvements with it enabled.

My experience thus far has been that this combination handles any game with Medium/High settings at 1080p and High/Ultra settings at 720p (depending on the intensity of the game). The big killer of performance is tessellation, which knocks off 10+ FPS by itself, oh and TressFX (good hair = bad framerate).

February 4, 2014 | 10:20 AM - Posted by Mark van Engelen (not verified)

Hi,

I'm running the same setup as you, but I can't get dual graphics to work... Which ASRock BIOS version are you running? I seem to be missing the dual graphics option in the UEFI.

February 5, 2014 | 06:27 AM - Posted by KevTheGuy (not verified)

You can enable dual graphics in the Catalyst Control Center.

January 29, 2014 | 09:28 AM - Posted by Ryan D. (not verified)

Still too little, too late, and most everybody I know in the PC gaming scene is on NVIDIA and Intel enjoying great gaming experiences.

January 29, 2014 | 09:50 AM - Posted by Anonymous (not verified)

Sorry, I posted this originally in wrong thread.

Ryan, why was this test not done with the current flagship A10-7850K APU?

While this is good news for AMD, I doubt gamers are going to choose this route (buying more expensive RAM) with that low of a discrete GPU.

Better to just buy a tier or 2 higher on discrete with that money.

You could easily build a rig at the same price point with an AMD Vishera paired with a 650 Ti Boost or a used 660 Ti, smash those Kaveri benchmarks, and not have to deal with frame pacing.

Also, pricing a rig without an OS, case, and PSU is totally inaccurate and I wish you'd stop doing that.

January 29, 2014 | 11:27 AM - Posted by Ryan Shrout

At the time of testing I did not have the 7850K.

January 29, 2014 | 11:04 AM - Posted by snook

Dear lord, all is lost in the comment section. Read more and enjoy this article for what it is. Two people out of eleven that have a clue?

January 29, 2014 | 12:06 PM - Posted by Curious (not verified)

Although AMD felt the R7 250 DDR3 was the "sweet spot", can you tell me why? GDDR5 is always better at the same price, isn't it?
Is it necessary to match the number of SIMD units? As you pointed out, the units in the card are 50% faster. But if you do have to match units, why isn't there a 250X with 512 units to mate with the flagship chip and the future A10-7800?
To all that are downplaying Dual Graphics because you can step up to a faster card at the same price, keep in mind you are forced to use a larger case and a bigger PSU, with more cost, heat, etc. There is at least one low-profile R7 250 that requires no PSU connector and would probably run fine with a 65W processor and SSD on my bundled slim case's 300W PSU.

January 29, 2014 | 12:48 PM - Posted by Anonymous (not verified)

My guess - DDR3 is preferred for the graphics card because that is what the A8-7600 is using. Crossfire usually works best when the individual units are roughly equal.

January 29, 2014 | 12:46 PM - Posted by icebug

I like reading these reviews and was waiting for the A8-7600 to come out so I could use it in my new LAN rig. Too bad AMD still hasn't released the real star price/performance part for this gen. I settled for a 760K that I will couple with my old 5850. I was looking forward to seeing how devs would have used the "compute cores" on this in the future; guess I'll have to wait for the next gen...

January 29, 2014 | 02:29 PM - Posted by HikingMike (not verified)

Thanks for the review! Well-written and has the useful graph data we need. I've been hoping to hear about progress on this.

January 29, 2014 | 02:42 PM - Posted by wow&wow (not verified)

"Higher frame rates don't really mean much if half of those frames are thrown away, not visible, and not affecting user experience."

The question is why the benchmark tool doesn't take user experience into account. It reminds me of how Japanese speaker systems look good on frequency response curves but don't sound good.

January 29, 2014 | 06:10 PM - Posted by Anonymous (not verified)

Hey, wondering when you are going to hold Intel's feet to the fire for their abysmal 'frame latency' performance? Or are you just going to continue to let them slide under the radar? Nice standards, PCPER.

January 29, 2014 | 09:26 PM - Posted by Nightowl (not verified)

The other thing that ticked me off to no end is that most of the time in a Crossfire configuration, that damned ULPS feature would keep the second 7850 completely shut off until there was a lot of action going on screen in a game. Case in point: the second 7850 would not power up in games like Diablo 3 and C&C 3 unless there was a lot of action going on with various enemies on screen trying to kill me all at once. In Skyrim, the game was smooth until there were lag spikes, and then I alt-tabbed out of the game, had a look at the ATI properties, and guess what, most of the time the second 7850 wasn't even running. There's no excuse for this kind of garbage from AMD/ATI, and this all started with the HD 6000 series of graphics chips and continues on into the R200 series of chips, as far as I know. I wish this was a joke, but if one does a search using a search engine such as Google, one will find all over the internet that ULPS is one big royal pain in the butt in a Crossfire configuration.

The only way AMD cards are truly going to work decently in a Crossfire configuration, in my opinion, is if and when AMD gets the lead out of their collective butts and puts out a BIOS update for their past cards, including the 6000 series, the 7000 series, and the R200 series, that completely shuts off ULPS. This will most likely never happen.

I'm starting to wonder if and when this site will take AMD to task over the ULPS issues that people have been reporting on various forums all over the internet for some time, even on AMD's own forums, but AMD has done nothing about.

These ULPS issues are the other reason I'm bailing out on AMD; I've had enough of ULPS shutting off the second 7850 on me when I'm trying to get a decent gaming session in.

January 30, 2014 | 06:24 AM - Posted by Anonymous (not verified)

It's great to hear that AMD is getting its APU and discrete GPU graphics matching fixed, but AMD should also work towards getting its APUs working with its high-end discrete GPUs, even if the APU graphics is only used for game engine physics and other GPGPU-type acceleration! If Kaveri can use its integrated GPU alongside the APU's CPU, produce a Core i5 level of computational performance, and pair with a high-end AMD discrete GPU (doing the graphics), then price-wise AMD will be able to compete for the mid-range desktop market dollars.

January 30, 2014 | 08:42 AM - Posted by Anonymous (not verified)

The problem is that a user with a higher end GPU wouldn't be buying a Kaveri part.

They'd probably be rocking an FX chip.

January 30, 2014 | 10:28 AM - Posted by Anonymous (not verified)

Well yes, if the AMD GPU was the top end, but AMD APUs' integrated graphics need to be paired with the equivalent AMD discrete GPUs (of equal graphics power). Someone needing more power than the R7 250, but not the top-end AMD discrete SKU, would still be able to benefit from Kaveri's HSA use of the built-in GPU for extra compute, an extra decoding boost for HTPC, etc., and mid-range gaming, where the i5 is strong and the Intel GPU just takes up space, as there are no Intel drivers able to utilize their integrated GPUs for other GPGPU loads while using a discrete AMD or NVIDIA GPU for gaming or other uses.
