
MSI GeForce GTX 780 3GB Lightning Graphics Card Review

Author: Ryan Shrout
Manufacturer: MSI

Testing Setup and Frame Rating Info

Testing Configuration

The specifications for our testing system haven't changed much.

Test System Setup
CPU: Intel Core i7-3960X Sandy Bridge-E
Motherboard: ASUS P9X79 Deluxe
Memory: Corsair Dominator DDR3-1600 16GB
Hard Drive: OCZ Agility 4 256GB SSD
Sound Card: On-board
Graphics Cards: MSI GeForce GTX 780 Lightning 3GB
                AMD Radeon HD 7970 GHz Edition 3GB
                NVIDIA GeForce GTX 780 3GB
                NVIDIA GeForce GTX TITAN 6GB
Graphics Drivers: AMD 13.8 beta
                  NVIDIA 326.80 beta
Power Supply: Corsair AX1200i
Operating System: Windows 8 Pro x64


What you should be watching for

  1. MSI GTX 780 Lightning vs GTX 780 Reference - Does the overclocked speed improve performance?  What about temperatures, sound levels and power consumption?
  2. MSI GTX 780 Lightning vs GTX TITAN Reference - Can this overclocked, lower core count GK110 GPU compete with the more expensive GTX TITAN?
  3. MSI GTX 780 Lightning vs HD 7970 GHz Edition - AMD's top single GPU card keeps coming down in cost.  How does it stand up against the higher priced MSI GTX 780 Lightning?

 

Frame Rating: Our Testing Process

If you aren't familiar with it, you should probably do a little research into our testing methodology, as it is quite different from others you may see online.  Rather than using FRAPS to measure frame rates or frame times, we use a secondary PC to capture the output from the tested graphics card directly and then apply post processing to the resulting video to determine frame rates, frame times, frame variance and much more.

This amount of data can be pretty confusing if you attempt to read it without the proper background, but I strongly believe that the results we present paint a much more thorough picture of performance than other options.  So please, read up on the full discussion about our Frame Rating methods before moving forward!!

While there are literally dozens of files created for each “run” of benchmarks, there are several resulting graphs that FCAT produces, as well as several more that we generate with additional code of our own.

If you don't need the example graphs and explanations below, you can jump straight to the benchmark results now!!

 

The PCPER FRAPS File

While the graphs above are produced by the default version of the scripts from NVIDIA, I have modified and added to them in a few ways to produce additional data for our readers.  The first file shows a subset of the data from the RUN file (described below): the average frame rate over time as defined by FRAPS, though we combine all of the GPUs being compared into a single graph.  This basically emulates the data we have been showing you for the past several years.
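To make the relationship between raw frame times and this FPS-over-time graph concrete, here is a minimal sketch in Python. It is not the actual NVIDIA/FCAT or PCPER script; the function name and the assumption that frame times arrive as a list of milliseconds are illustrative only.

```python
# Minimal sketch: bucket per-frame times (in ms) into one-second windows and
# count the frames in each window to get an FPS-over-time curve.
def fps_over_time(frame_times_ms):
    """Return a list with one FPS value per second of the run."""
    fps_per_second = []
    elapsed_ms = 0.0
    frames_in_window = 0
    for ft in frame_times_ms:
        elapsed_ms += ft
        frames_in_window += 1
        if elapsed_ms >= 1000.0:          # a full second of game time has passed
            fps_per_second.append(frames_in_window)
            elapsed_ms -= 1000.0
            frames_in_window = 0
    return fps_per_second

# Example: 120 frames at 16.7 ms each works out to roughly 60 FPS per second.
print(fps_over_time([16.7] * 120))        # -> [60, 60]
```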

 

The PCPER Observed FPS File

This graph takes a different subset of data points and plots them similarly to the FRAPS file above, but this time we are looking at the “observed” average frame rates, shown as the blue line in the RUN file described below.  This takes out the dropped frames and runts, giving you the performance metric that actually matters: how many frames are being shown to the gamer to improve the animation sequences.

As you’ll see in our full results on the coming pages, a big difference between the FRAPS FPS graph and the Observed FPS graph indicates cases where the gamer is likely not getting the full benefit of the hardware investment in their PC.
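As a rough illustration of how the observed figure differs from the raw FRAPS-style figure, the sketch below filters out drops and runts before counting frames. The 21-scanline runt cutoff and the (frame time, scanlines) record format are assumptions for the example, not necessarily the exact values or formats used by FCAT or our own scripts.

```python
# Hedged sketch: keep only frames that actually contributed to the on-screen
# animation. A frame with zero visible scanlines was dropped; a frame below
# the (assumed) runt threshold is a runt and is ignored as well.
RUNT_THRESHOLD_SCANLINES = 21   # illustrative cutoff, not an official FCAT value

def observed_frame_times(frames):
    """frames: list of (frame_time_ms, scanlines_visible) tuples from the capture."""
    kept = []
    for frame_time_ms, scanlines in frames:
        if scanlines == 0:
            continue            # dropped frame: never reached the display
        if scanlines < RUNT_THRESHOLD_SCANLINES:
            continue            # runt: too thin to add to the animation
        kept.append(frame_time_ms)
    return kept
```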

 

The PLOT File

The primary file that is generated from the extracted data is a plot of calculated frame times, including runts.  The numbers here represent the amount of time each frame appears on the screen for the user; a “thinner” line across the time span represents frame times that are consistent and thus should produce the smoothest animation for the gamer.  A “wider” line, or one with a lot of peaks and valleys, indicates much more variance and is likely caused by a lot of runts being displayed.
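For readers who want to reproduce something like this plot from their own frame time logs, here is a minimal matplotlib sketch; the dictionary input format is an assumption for the example, not the format our scripts use.

```python
# Illustrative sketch of a PLOT-style graph: one line of frame display times
# per card across the run. A thin, flat line means consistent frame pacing.
import matplotlib.pyplot as plt

def plot_frame_times(results):
    """results: dict mapping card name -> list of frame display times in ms."""
    for card, frame_times_ms in results.items():
        plt.plot(frame_times_ms, linewidth=0.5, label=card)
    plt.xlabel("Frame number")
    plt.ylabel("Frame time (ms)")
    plt.legend()
    plt.show()
```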

 

The RUN File

While the two graphs above show combined results for a set of cards being compared, the RUN file shows the results from a single card for that particular test.  It is in this graph that you can see interesting data about runts, drops, average frame rate and the actual frame rate of your gaming experience.

For tests that show no runts or drops, the data is pretty clean.  This is the familiar frames-per-second-over-time graph that has become the standard for performance evaluation of graphics cards.

A test that does have runts and drops will look much different.  The black bar labelled FRAPS indicates the average frame rate over time that traditional testing would show if you counted the drops and runts in the equation, as the FRAPS FPS measurement does.  Any area in red is a dropped frame: the wider the red region, the more colored bars from our overlay were missing in the captured video file, indicating the gamer never saw those frames in any form.

The wide yellow area represents runts, the thin bands of color in our captured video that we have determined do not add to the animation of the image on the screen.  The larger the area of yellow, the more often those runts are appearing.

Finally, the blue line is the measured FPS over each second after removing the runts and drops.  We call this metric the “observed frame rate”, as it measures the actual speed of the animation that the gamer experiences.
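A compact way to think about the black FRAPS bar versus the blue observed line is sketched below; the per-frame record format and the runt cutoff are assumptions carried over from the earlier example, not the exact implementation of our scripts.

```python
# Hedged sketch of the two headline figures in a RUN graph for a single card:
# the FRAPS-style average counts every frame, while the observed average
# excludes drops (red) and runts (yellow).
def run_summary(frames, run_seconds, runt_threshold_scanlines=21):
    """frames: list of (frame_time_ms, scanlines_visible) tuples from the capture."""
    total = len(frames)
    drops = sum(1 for _, s in frames if s == 0)
    runts = sum(1 for _, s in frames if 0 < s < runt_threshold_scanlines)
    shown = total - drops - runts
    return {
        "fraps_fps": total / run_seconds,     # black bar: drops and runts included
        "observed_fps": shown / run_seconds,  # blue line: frames the gamer actually saw
        "drops": drops,
        "runts": runts,
    }
```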

 

The PERcentile File

Scott introduced the idea of frame time percentiles months ago, but now that we have different data using direct capture as opposed to FRAPS, the results might be even more telling.  In this case, FCAT is showing percentiles not by frame time but by instantaneous FPS.  This tells you the minimum frame rate that will appear on the screen for any given percentage of time during our benchmark run.  The 50th percentile should be very close to the average frame rate of the benchmark, but as we creep closer to 100% we see how the frame rate is affected.

The closer this line is to being perfectly flat, the better, as that would mean we are running at a constant frame rate the entire time.  A steep decline on the right-hand side tells us that frame times are varying more and more frequently and might indicate potential stutter in the animation.
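A minimal sketch of how such a percentile curve can be built from frame times is shown below; the exact binning or interpolation FCAT uses may differ, so treat this as an approximation of the idea rather than its implementation.

```python
# Hedged sketch of a PER-style curve: convert each frame time into an
# instantaneous FPS, sort from fastest to slowest, and read off the frame
# rate sustained for at least a given percentage of the run.
def fps_percentile_curve(frame_times_ms):
    """Return (percentile, minimum instantaneous FPS) points for the run."""
    inst_fps = sorted((1000.0 / ft for ft in frame_times_ms if ft > 0), reverse=True)
    n = len(inst_fps)
    return [(100.0 * (i + 1) / n, fps) for i, fps in enumerate(inst_fps)]
```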

 

The PCPER Frame Time Variance File

Of all the data we are presenting, this is probably the one that needs the most discussion.  In an attempt to create a new metric for gaming and graphics performance, I wanted to find a way to define stutter based on the data sets we had collected.  As I mentioned earlier, we can define a single stutter as a variance level between t_game and t_display.  This variance can be introduced in t_game, in t_display, or on both levels.  Since we can currently only reliably test the t_display rate, how can we create a definition of stutter that makes sense and that can be applied across multiple games and platforms?

We define a single frame variance as the difference between the current frame time and the previous frame time, that is, how consistent the timing of the two frames presented to the gamer is.  However, as I found in my testing, plotting the value of this frame variance is nearly a perfect match to the data presented by the minimum FPS (PER) file created by FCAT.  To be more specific, stutter is only perceived when there is a break from the previous animation frame rates.

Our current working theory for a stutter evaluation is this: find the current frame time variance by comparing the current frame time to the running average of the frame times of the previous 20 frames.  Then, by sorting these variance values and plotting them in percentile form, we get an interesting look at potential stutter.  Comparing the frame times to a running average rather than just to the previous frame should prevent false positives from legitimate performance peaks or valleys found when moving from a highly compute-intensive scene to a lighter one.
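The running-average comparison described above can be sketched as follows. The 20-frame window comes from the text; the use of a simple mean and an absolute difference are assumptions made for the illustration, not necessarily how our scripts compute it.

```python
# Hedged sketch of the frame time variance ("ISU") calculation: compare each
# frame time to the mean of the previous 20 frame times, then sort the
# deviations into percentile form for plotting.
def frame_variance_percentiles(frame_times_ms, window=20):
    """Return (percentile, frame time variance in ms) points for the run."""
    deviations = []
    for i in range(window, len(frame_times_ms)):
        running_avg = sum(frame_times_ms[i - window:i]) / window
        deviations.append(abs(frame_times_ms[i] - running_avg))
    deviations.sort()
    n = len(deviations)
    return [(100.0 * (i + 1) / n, d) for i, d in enumerate(deviations)]
```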

While we are still trying to figure out if this is the best way to visualize stutter in a game, we have seen enough evidence in our game play testing, and by comparing the above graph to other data generated through our Frame Rating system, to be reasonably confident in our assertions.  So much so, in fact, that I am going to call this data the PCPER ISU; beer fans will appreciate that the acronym stands for International Stutter Units.

To compare these results, you want to see a line that stays as close to the 0 ms mark as possible, indicating very little frame time variance when compared to the running average of previous frames.  There will be some inevitable incline as we reach the 90th percentile and beyond, but that is expected with any gameplay sequence that varies from scene to scene.  What we do not want to see is a sharp upturn, which would indicate higher frame variance (ISU) and could be an indication that the game exhibits microstuttering and hitching problems.

August 27, 2013 | 07:28 PM - Posted by Johnny Rook (not verified)

Cool card, but "only" a 1173 MHz core boost overclock? C'mon, my reference PNY GTX 780 overclocks stably to 1173 MHz without any overvoltage... I was expecting a lot more from this Lightning...

August 27, 2013 | 08:04 PM - Posted by Ryan Shrout

Check the update on the last page - actually running at 1241 MHz.

August 30, 2013 | 05:04 AM - Posted by Wendigo (not verified)

ALL of the Kepler cards have a Max Boost different from the official Boost. It isn't only the MSI Lightning: every card has a unique Max Boost, different from the official boost of the model and from other units of the same model.

ALL of the Kepler cards reach higher clocks than the official boost that GPU-Z shows.

August 27, 2013 | 07:30 PM - Posted by Mr Marauder (not verified)

The GPU reactor can be taken off without any impact on performance, as it only starts to shine when watercooled or LN2 cooled.

August 27, 2013 | 08:04 PM - Posted by Ryan Shrout

Ah cool, thanks for that!  I'll try removing it tomorrow.

August 27, 2013 | 08:47 PM - Posted by tabuburn (not verified)

You should give EVGA a call and get a sample of their GTX 780 Classified and compare it to this since both tend to trade blows.

August 28, 2013 | 12:20 AM - Posted by capawesome9870

I just got an idea on how to improve the graphs you give out with these Frame Rating articles (although they are very interesting to me, being an AMD user).

The idea is to give a 'highlight' of the performance and actually print the number for the observed FPS 2-5 times per benchmark, just the observed, because that is the one that matters.

Another thing: could you stretch the graphs a little by cutting out the bottom portion? Example: if all the FPS values are above 20, why show the part of the graph under 15? This would give a little more clarity when just looking at the graphs in the article, without clicking on a particular graph to get the bigger picture.

August 28, 2013 | 05:33 AM - Posted by Ryan Shrout

Thanks for the ideas.  I'll play around with trying to make them more readable in the coming weeks!

August 29, 2013 | 12:07 AM - Posted by capawesome9870

cool: thanks Ryan. Keep up the good work.

September 5, 2013 | 07:19 AM - Posted by Mountainlifter

Hi Ryan, I was wondering if you could include an average line on the observed FPS graph or state the average. Like on this graph.
https://dl.dropboxusercontent.com/u/2371050/TITAN_BENCHES/MPtest.png

Or is there a reason why it is left out?
It makes it easier to compare with what other review sites report.

August 28, 2013 | 12:33 AM - Posted by Anonymous (not verified)

This card is designed for overclocking; like the Classified, you should be able to overvolt the core to 1.35 V safely and probably hit 1300 to 1400 MHz on the core. It is a bit pointless to test the card without any overvolting, because NVIDIA's 1.21 V limitation will let the card perform like any other....

August 28, 2013 | 12:50 AM - Posted by OCN.net not impressed (not verified)

overclock the damn thing and then people might care about this pansy review.

oc max vs Classified oc max vs GTX Titan oc max at 1.35 V. you won't, because that's way above your skill set as usual, ryan.

August 28, 2013 | 04:56 AM - Posted by Ian (not verified)

And he overclocked it to a level the typical gamer will use on this card.

Not everyone wants to massively clock their card just to see high numbers pop up on a screen once a graphics test is done, feeling great about dialing some numbers into an electronic device to make it run faster because they lack any real sense of achievement in life. Most people don't particularly care about that; they just want a little bit of extra power and safe temps while gaming.

Yes, this card is for overclocking, and he overclocked it, but the option is there should you wish to push it further. This doesn't mean a standard consumer who has no interest in number chasing cannot purchase this card for their own use, to play games at a Titan level for less money.

Is it possible you are concerned as to what the new Lightning can achieve?

My Galaxy HOF is completely stable at 1293 MHz on the core under gaming and can push 1330 MHz for synthetic benchmarks. Let's hope the Lightning can at least exceed these. I myself am just a normal gamer and not big into overclocking, because number chasing is not my main focus. I have more interesting things to do with my time.

August 28, 2013 | 05:35 AM - Posted by Ryan Shrout

First, thanks for stopping by.  :)

Second, max OC vs max OC is probably one of the worst indicators on which to base a conclusion about a graphics card.  It's important, but it can't be the only or even the primary concern.  Why?  VARIABILITY.

Each card sample is going to be different on the Lightnings, each is going to be different on the Classifieds, and the difference between the lines is likely going to vary even more.  I can overclock this card further (and I will, I'm sure, once I get the updated BIOS from MSI), but even if I get an additional 50 MHz of clock, that doesn't really mean YOU or any buyer will.

Now if MSI were to send me a dozen cards for each review we did, you'd have a better argument.

Also, go fuck yourself.

August 28, 2013 | 08:56 AM - Posted by Thedarklord

Great response, lol, you rock dude.

August 30, 2013 | 08:58 AM - Posted by snook

way to poop on him.

August 28, 2013 | 08:49 AM - Posted by mLocke

This may help you. http://youtu.be/aWVywhzuHnQ

August 28, 2013 | 05:41 AM - Posted by Anonymous (not verified)

On the first comparative graphs, you reference the 680 and the 680 SLI, but not the 780?

August 28, 2013 | 05:45 AM - Posted by Ryan Shrout

If you are looking at THIS page then you are just seeing our explanation of the TYPE of graphs you are going to see, not the specific results for this review.

http://www.pcper.com/reviews/Graphics-Cards/MSI-GeForce-GTX-780-3GB-Ligh...

August 28, 2013 | 06:20 AM - Posted by !Shocked! (not verified)

Did you JUST NOW figure out that GPU-Z doesn't show the ACTUAL boost clock? Wow guys... I love your reviews, but holy cow, pull it together!

August 28, 2013 | 06:24 AM - Posted by !Shocked! (not verified)

So does this mean all your past reviews with boost clocks will need adjusting to show the actual clocks reached?

Another question: did it actually HOLD that clock with the 109% limit, or was it all over the place? I ask because I am not sure if PEAK values are what should be reported. I am personally torn.

I know that 109% limit was hit VERY easily.

August 28, 2013 | 08:14 AM - Posted by Ryan Shrout

It held that clock based on the logs, yes.  We are awaiting another BIOS from MSI that will move the 109% slider a bit further...

August 28, 2013 | 06:40 AM - Posted by ProxYO (not verified)

EVGA's Classi vs Galaxy's HOF vs MSI's Lightning

plz&ty ;D

August 28, 2013 | 09:05 PM - Posted by tabuburn (not verified)

I second this.

August 28, 2013 | 07:36 AM - Posted by !Shocked! (not verified)

Last message!

The extra cooler does NOT go on top of the other one. It is used when you go dry ice or LN2, as the full-cover bracket that comes with it can get in the way of mounting plates.

August 28, 2013 | 08:20 AM - Posted by Ryan Shrout

Noted!

August 28, 2013 | 08:58 AM - Posted by Thedarklord

Something I noticed, and maybe I missed it in the review, but I've been seeing a lot of reports of benchmarking on Windows 8 being inaccurate due to Real Time Clock issues.

Is that something that was noted in the article, or was the hotfix applied?

August 28, 2013 | 09:44 AM - Posted by Anonymous (not verified)

Why, for the DiRT and Skyrim config page screenshots, do you show 1080p but then show results in 1440p?

August 28, 2013 | 10:15 AM - Posted by Ryan Shrout

Settings screenshots show the SETTINGS but the resolution is variable.  Many times we test at 19x10, 25x14 and 57x10.

August 28, 2013 | 12:30 PM - Posted by Anonymous (not verified)

What is the max the voltage can be set to?
