The NVIDIA GeForce RTX 2070 Review - Featuring EVGA!

Author: Ken Addison
Manufacturer: NVIDIA

Testing Suite and Methodology Update

Along with a new generation of graphics cards comes an updated GPU testbed for us here at PC Perspective. A lot has changed about desktops since we last updated our GPU testbed in 2015.

With the release of AMD's Ryzen CPUs, and Intel's subsequent Coffee Lake CPUs, core counts in consumer-level processors have been jumping up, compared to the quad-core, eight-thread stagnation of yore.

While we usually build our GPU testbeds on Intel's HEDT platform (our previous testbed used the Core i7-5960X), this time we decided to go with the six-core, twelve-thread Intel Core i7-8700K.

Given the increasing core counts of consumer processors, and the diminished focus on multi-GPU setups, we felt it was time to bring our GPU testbed to a more reasonable price level.

  PC Perspective GPU Testbed (2018)

Processor      Intel Core i7-8700K
Motherboard    ASUS ROG Z370-H Gaming
Memory         Corsair Vengeance LPX DDR4-3200 (running at DDR4-2666)
Storage        Samsung 850 EVO 250GB (OS); Micron 1100 2TB (games)
Power Supply   Corsair AX1500i 1500 watt
OS             Windows 10 x64 Version 1803 (RS4)
Drivers        AMD: 18.8.2; NVIDIA: 416.15

Discounting the overkill 1500W power supply, which we use because it was already modified for our power measurement needs, and the 2TB secondary SSD for game storage, this new build minus the GPU comes in at just around $1000. We feel this price point is a lot more reasonable than testbeds of yore (with a $1600 processor, for example), and more representative of the PC gaming community at large, while still not bottlenecking the GPU under test.

In testing this new RTX series of GPUs, there are a few interesting comparisons to make. First, we want to look at generation-over-generation performance against the previous GTX 10-series products. Next, of course, we'll want to compare these new GPUs to the fastest currently available options from both NVIDIA and AMD. The cards we will be testing are listed below:

As for the games we tested, we wanted to update our test suite with some of the most modern PC titles, while retaining a few older titles that are still immensely popular.

  • Far Cry 5
  • Wolfenstein II: The New Colossus
  • Ashes of the Singularity: Escalation
  • F1 2018
  • Grand Theft Auto V
  • Sniper Elite 4
  • Strange Brigade
  • Witcher 3
  • Hitman (2016)

Our Testing Process

While the way we present our data is a bit tweaked, our testing methodology is not. We are still using the capture-based Frame Rating technique that we helped pioneer back in 2013. For those who are unaware of Frame Rating, you can read this great in-depth breakdown of the process.

As for the data we are presenting, we have simplified things a bit from years past. Instead of presenting a series of six graphs for every Frame Rating output, we are now focusing on two major areas: frame rate percentiles and frame times.

For each game tested, you'll find a bar graph with the average, 95th percentile, and 99th percentile frame rates. This gives an idea of the relative performance of each GPU while taking the ever-important frame consistency into account to determine how smoothly a game was running. Essentially, the closer the average, 95th, and 99th percentile numbers are to each other, the smoother the gaming experience.
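To make the percentile idea concrete, here is a minimal sketch of how such figures could be derived from a list of per-frame render times. The function names are our own, and the real Frame Rating pipeline works from captured video output rather than in-engine timestamps:

```python
def percentile(sorted_vals, pct):
    """Nearest-rank percentile of an ascending-sorted list."""
    idx = max(0, int(round(pct / 100.0 * len(sorted_vals))) - 1)
    return sorted_vals[idx]

def frame_rate_summary(frame_times_ms):
    """Average, 95th, and 99th percentile frame rates (FPS)
    from per-frame times in milliseconds."""
    times = sorted(frame_times_ms)
    avg_fps = 1000.0 * len(times) / sum(times)
    # The "95th percentile frame rate" is the rate implied by the
    # 95th percentile (slow-end) frame time, and likewise for 99th.
    fps_95 = 1000.0 / percentile(times, 95)
    fps_99 = 1000.0 / percentile(times, 99)
    return avg_fps, fps_95, fps_99

# A run that is smooth 95% of the time but stutters for 5% of frames:
avg, p95, p99 = frame_rate_summary([16.7] * 95 + [33.3] * 5)
```

Note how the 99th percentile figure (about 30 FPS here) exposes the stutter that a healthy-looking average (about 57 FPS) hides, which is exactly why the three bars are shown together.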

Similarly, each game tested will feature a frame time chart. The numbers here represent the amount of time each frame appears on screen for the user. A "thinner" line across the time span represents frame times that are consistent, and thus should produce the smoothest animation for the gamer. A "wider" line, or one with a lot of peaks and valleys, indicates much more variance, often caused by runt frames being displayed.
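The "thinner versus wider line" idea can be roughly quantified as frame-to-frame variation. A simplified heuristic (our own illustration, not PC Perspective's actual runt-detection logic) might flag large jumps between consecutive frame times:

```python
def frame_time_spikes(frame_times_ms, threshold_ms=4.0):
    """Count frames whose time deviates from the previous frame
    by more than threshold_ms -- a crude proxy for visible stutter.
    The 4 ms default is an arbitrary choice for illustration."""
    spikes = 0
    for prev, cur in zip(frame_times_ms, frame_times_ms[1:]):
        if abs(cur - prev) > threshold_ms:
            spikes += 1
    return spikes
```

A trace with many such spikes would plot as the "wider" line described above, even if its average frame rate matched a smoother run.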

Lastly, we'll be providing a quick summary of relative performance between competing GPUs, calculated from the average frame rate.
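As an illustration of how such a summary could be computed (the function and the card figures here are hypothetical), each card's average frame rate is simply normalized against a chosen baseline card:

```python
def relative_performance(avg_fps_by_gpu, baseline):
    """Express each GPU's average frame rate as a percentage of
    the chosen baseline card (baseline = 100%)."""
    base = avg_fps_by_gpu[baseline]
    return {gpu: 100.0 * fps / base
            for gpu, fps in avg_fps_by_gpu.items()}

# Hypothetical averages for one game, normalized to the GTX 1080:
index = relative_performance(
    {"GTX 1080": 60.0, "RTX 2070": 66.0}, baseline="GTX 1080")
```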

October 16, 2018 | 10:30 AM - Posted by Jabbadap

The RTX 2070 FE has a dual-link DVI connector instead of a DP connector too, so its display outputs are exactly the same as the FE. Interesting PCB though; maybe they'll release a mITX version of this card in the future.

October 16, 2018 | 10:41 AM - Posted by Ken Addison

Whoops, sorry about that. Article corrected!

October 16, 2018 | 11:28 AM - Posted by NvidiasLowEndCostsMoreThisGenerationAlso (not verified)

This is the go to RTX card once all the Pascal options from the GTX 1080 and above are no longer available.

Let's see if Nvidia waits on the RTX 1060 a little longer if there are any remaining GTX 1070Ti and below Pascal SKUs still remaining to be sold off. Nvidia only has to compete with Nvidia mostly and then it's all RTX at higher prices after all the stocks of Pascal GPUs dry up.

It's all good for Nvidia with AMD sitting out this round and not much except some rumored Polaris refresh-2 at 12nm. AMD is all in on 7nm but that's next year. Nvidia will make bank this year on Pascal and Turing while AMD makes bank on Epyc, Ryzen, and Threadripper. AMD only has Vega and Pascal(For the sub $300 dollar market) for discrete GPU sales opportunities.

Interestingly enough, Vega in the form of integrated graphics will still have some interesting market share numbers for both mobile and desktop APUs, and those figures will continue to grow regardless of the discrete GPU market that's decidedly an Nvidia advantage. AMD has the advantage in the console market with XBONE, Sony, and that other semi-custom first Chinese Console/PC that uses a Zen/Vega semi-custom APU SKU.

I can see AMD getting Tensor Core IP available long before AMD can compete with any Nvidia ray tracing IP. And this will be at Microsoft's and Sony's behest, because something similar to that Nvidia DLSS IP is just what the console makers would Really Want. So some new IP from AMD's Semi-Custom folks in the form of AI/Tensor Core based upscaling, without needing more shader cores to do the upscaling workloads. AMD's Semi-Custom division has probably been working on this since long before Nvidia's Turing microarchitecture was fully known to the public, as Microsoft has been working with Nvidia for some time getting the DXR API engineering completed. Both MS and Sony are big users of upscaling, as is the console market in general.

October 16, 2018 | 12:02 PM - Posted by NvidiasLowEndCostsMoreThisGenerationAlsoEdit (not verified)

Edit: AMD only has Vega and Pascal(For the sub $300 dollar market) for discrete GPU sales opportunities

To: AMD only has Vega and Polaris(For the sub $300 dollar market) for discrete GPU sales opportunities

Getting the Nvidia and AMD P names mixed up!

October 18, 2018 | 12:33 AM - Posted by bj (not verified)

There won't be an RTX 1060. It will be known as the GTX 2060.

October 16, 2018 | 01:26 PM - Posted by Voodoo2-SLi (not verified)

Strange Brigade 1440p benchmark pic is the same as Sniper Elite 4. Please fix. Need that benchmark to compare.

October 16, 2018 | 01:43 PM - Posted by Ken Addison

Sorry! The graph should be fixed now. Thank you for bringing this to my attention.

October 16, 2018 | 01:45 PM - Posted by Voodoo2-SLi (not verified)

Thanks!

October 16, 2018 | 10:55 PM - Posted by Voodoo2-SLi (not verified)

WQHD Performance Index for PC Per's GeForce RTX 2070 Launch Review

132.6% ... GeForce RTX 2080 FE
105.6% ... GeForce RTX 2070 Reference
100% ..... GeForce GTX 1080 FE
80.5% .... GeForce GTX 1070 FE
88.0% .... Radeon RX Vega 64 Reference

Index from 15 other launch reviews, with an overall performance index of the GeForce RTX 2070 launch, here:
https://www.3dcenter.org/news/geforce-rtx-2070-launchreviews-die-testres...

October 16, 2018 | 02:05 PM - Posted by Rocky1234 (not verified)

Great review thank you.

If this card stays at this price, I could probably see myself picking up one of these EVGA 2070 Black cards for my upgrade. At this price point it makes sense to pick one over a 1080 card. Even though I suspect the RT functions will probably be almost useless on the 2070, it would still be good to have the option there.

With all of that said I suspect the prices will be going up on these cards fairly quickly as the supply chain dries up for them.

October 18, 2018 | 12:35 AM - Posted by bj (not verified)

Everyone saying the RTX features will be useless isn't thinking about the resolution they will probably work at. I still use a 120Hz 1080p monitor for gaming. So for ray tracing, I'm sure it will do a lot better at 1080p vs 1440p or 4K.

October 16, 2018 | 03:44 PM - Posted by Don't panic! (not verified)

$500, if you can even get one at that price, is far from being the sweet spot between performance and price. OK, maybe if you consider only the RTX family, then yes, the RTX 2070 offers the best value of these cards.

I see all the data from this review and also from other places, and I just can't come to the conclusion that this card deserves a gold award. At $400, then yeah, it's good. At $500 it's way overpriced.

It's just very sad to see the price keep creeping up every generation (especially this gen).

October 16, 2018 | 10:28 PM - Posted by svnowviwvn

The RTX 2070 is faster than a GTX 1080, and you somehow think it should be sold for $50 under the lowest price a GTX 1080 goes for on Newegg. Get real!!!

October 17, 2018 | 04:02 AM - Posted by lololol (not verified)

Clowns like you would justify the RTX 2080 being $100k because it's 10,000 times faster than a 3dfx Voodoo Banshee.

It's good that the world is not only made of clowns like you.

Enough of them to get the high end from $450 to $1,250 in just the time it took to get from the GTX 580 to the 2080 Ti.

October 17, 2018 | 08:25 AM - Posted by Stef (not verified)

Hmmm... what's the MSRP of a Vega 64? Then why should the RTX 2070 be cheaper?

October 17, 2018 | 12:59 PM - Posted by Don't panic! (not verified)

Because of the generational leap. I mean, in regards to price-to-performance it's almost the same as the Polaris refresh. And I didn't notice any awards given to the RX 580.

October 17, 2018 | 01:43 PM - Posted by Stef (not verified)

Is the performance/price ratio the only parameter they look at to assign an award?
The RX 580 didn't offer any new feature or efficiency improvement; it was just a rebrand, as you said. Why even consider a rebranded product for an award? God forbid :)

October 17, 2018 | 03:05 PM - Posted by Don't panic! (not verified)

You mean the features that aren't available to consumers and can't even be tested yet? You can't rate a product based on a promise (and you shouldn't buy one either). Realistically, ray tracing and DLSS will only be meaningfully available to consumers by the next-gen launch.
The only feature you can test is asynchronous compute, but that feature translates directly into performance numbers. So what else? Efficiency. Yeah, I'm all about that, but it doesn't cut it when the price-to-performance is not there.

That's what I said: the RX 580 didn't deserve an award, and the RTX 2070 shouldn't either.

October 17, 2018 | 06:01 AM - Posted by othertomperson (not verified)

Wait wait wait, hang on. TU106? You mean to say that what has historically been the x60-tier chip is now in the x70 GPU, with the price bump to match?

This is a £200-tier product, rightfully the RTX 2060, masquerading as high end.

As far as performance goes, I already bought two of these two years ago. Only mine support SLI, so I can actually use them both.

October 17, 2018 | 08:21 AM - Posted by Stef (not verified)

Wait wait wait, hang on. TU106's die size is 445mm², just a bit smaller than that of Vega 64, which has the same MSRP (the air-cooled one); the GP106 in the GTX 1060 is a much smaller chip at 200mm².

Small hint: production cost isn't related in any way to the chip name...

October 17, 2018 | 04:43 PM - Posted by othertomperson (not verified)

Wow you're totally right. If only Nvidia weren't fully in control over their own chip design.

I hope whoever has a gun to Jen-Hsun's head and is forcing his company to make such idiotic design choices is feeling bad about themselves...

October 18, 2018 | 09:17 AM - Posted by Stef (not verified)

NVIDIA should definitely hire you as CTO and CEO given your incredibly deep knowledge in chip design and your forward-looking vision

October 19, 2018 | 04:00 PM - Posted by othertomperson (not verified)

They probably should tbh given the blunders they made this year and how everything I called ended up happening. This launch wouldn't have been such a clusterfuck if they didn't respond months late, and in an overly heavy-handed way to the mining boom. AMD, while unfortunate to release in the middle of it, were wise to not ramp up production in response to it. Now Nvidia has to get rid of an overly competitive Pascal before Turing is remotely compelling.

And as far as chip design goes, even a staunch corporate apologist such as yourself must recognise that selling a 775mm² die to gamers is mental. If you are correct and Nvidia truly are so cash-strapped that they have genuinely needed to double the price of every SKU in the four years since Maxwell, then they are truly screwing the pooch somewhere.

October 17, 2018 | 06:53 AM - Posted by Mehdi (not verified)

Ohhhh come on!!!!! Where's the comparison with the 1070 Ti, a DAMN TI VERSION????? Making comparisons with all cards but no 1070 Ti, ARE YOU KIDDING ME PCPER????????? Sorry.

October 17, 2018 | 10:01 AM - Posted by Subsailor

The GTX 1070 beat the performance of the previous flagship (980 Ti) by approx 10% and was only $379, compared to the $650 launch price of the 980 Ti.

The RTX 2070 is only 10% better than the non-flagship 1080 and costs the same $500.

How does this card rate any award at all? This is not a "sweet spot between performance and price", it's a terrible value for the money compared to the last generation.

October 17, 2018 | 11:55 AM - Posted by RaysMaybeNotButAIandTensorCoreBasedUpscalingForeSure (not verified)

Look, Nvidia made RTX for the Pro market and those folks do not need real time that much. Look at the GP102 Pascal die that was made for Pro usage first, with Nvidia only later taking a binned GP102 die and creating the GTX 1080Ti.

The GP104 base die tapeout was only meant for consumer GPU SKUs and the GTX 1080 had the full die while the GTX 1070 was from a binned GP104 die that did not have enough working units to be made into a 1080. GP106 was consumer also and the GTX 1060 came from that base die tapeout.

Now for TU102, the top bins of that base die tapeout will be used for Quadros as usual, with the RTX 2080Ti still the lowest bin from that TU102 base die tapeout. But this time around the top TU104 base die tapeout starts with a Quadro variant, and TU104 is not simply a consumer-only tapeout anymore. The RTX 2080 is made from a binned TU104 base die and has fewer resources than the top-binned TU104-derived part that's for the Pro/Quadro market. The TU106-based RTX 2070 is not a binned TU104 variant this generation, as the 2070 is the top-end TU106 variant this generation.

So Nvidia has Quadro as the major beneficiary of that RTX ray tracing technology and that Pro market will definitely make use of Ray Tracing(not necessarily in real time) and Tensor Cores.

Now Tensor Core technology will definitely benefit gaming beyond just the ray tracing IP, as that AI-based upscaling still works even with ray tracing turned off, so gaming performance via DLSS will not be hindered like it is with ray tracing turned on.

In fact Nvidia's "Real Time" Ray tracing RT cores can not produce enough rays in the 33.33(30FPs) down to 16.76(60FPS) frame times and below time ranges so Nvidia has to denoise that limited Ray Tracing output with a AI based denoising algorithm that's running on the tensor cores. AI based Upscaling via the Tensor Cores based DLSS is the big game changer for Nvidia and faster 4K gaming via DLSS and 1440p output upscaled via DLSS to look more like native 4K and allow even higher frame rates at Upscaled-4K via the tensor cores based DLSS Trained AI.

So even more so this time around, Nvidia's major market focus is the professional market, with even more Nvidia base die tapeout variants, TU102 and TU104, having Quadro in mind before any consumer binned variants are made. TU106 is now the only current tapeout that is all consumer oriented from top to bottom, most likely starting with the RTX 2070 and below.

Gamers may not need or want ray tracing, but they will probably want DLSS, as DLSS is not going to slow down average frame rates. DLSS with 1440p output upscaled to 4K is going to result in higher average frame rates at 4K if that DLSS technology works as Nvidia has stated.

You can almost be guaranteed to see AMD get some form of "DLSS" like Tensor Core IP in its semi-custom console APUs ASAP as Microsoft and Sony are heavily into using upscaling on their respective console offerings. That will be the first thing(Tensor Cores) that AMD would want for the professional AI market and consumer markets(AI based Upscaling).

Real-time ray tracing is not a priority for AMD, as the professional graphics/animation markets can do all the ray tracing they require in non-real-time accelerated fashion via OpenCL or CUDA on the GPU's shader cores, without the need for any in-hardware ray tracing cores. Some animation houses are still using CPU clusters, where ray tracing workloads were traditionally done.

October 17, 2018 | 12:01 PM - Posted by RaysMaybeNotButAIandTensorCoreBasedUpscalingForeSure (not verified)

Edit: 33.33(30FPs) down to 16.76(60FPS) frame times

To: 33.33ms(30FPs) down to 16.67ms(60FPS) frame times

October 18, 2018 | 05:10 AM - Posted by Johan Steyn (not verified)

There is no logic to how it got a gold award. But maybe the wallets are a bit thicker...

October 18, 2018 | 12:41 AM - Posted by bj (not verified)

Will PCPER be able to compare the EVGA RTX 2070 XC vs the RTX 2070 XC Ultra models? I'm interested in finding out the thermal performance difference between a 2 slot and 3 slot card is. Is it worth the extra $20 price difference?

October 18, 2018 | 05:09 AM - Posted by Johan Steyn (not verified)

Yip, I came here since I left for a while, and I see why I left. A ridiculous article. While the net is hot about how overpriced especially the 2070 is, you give it a gold award. Wow.

Bye then again. How much did nVidia pay you this time?

October 18, 2018 | 02:51 PM - Posted by Anonymous-911 (not verified)

A friend got a 2080 by surprise from Amazon and he doesn't even have a computer. So I quickly offered him $500, and then after thinking about it, I switched it to $300... because if you are already playing in 1080p, what am I going to gain? I could go 4K, pretty sweet, Forza in 4K. But what else? I'm not going to jump to battle royale in 4K. Can't predict that future games will run in 4K on the 2080. So $300 seems fair. This is the reality, yet the video cards jumped in price. I guess people that want video cards pay whatever price the piper is setting. PCPer Gold Award because it's the same price as the crap that came out 2 years ago and 10% faster. Moore is rolling in his grave.

You need a video card and have money to throw away? Get the 2080 Ti, don't be a chump.

October 19, 2018 | 08:43 AM - Posted by wolsty7

You might want to check the Vega 64 Sniper Elite 1440p results; it surely doesn't get 50% worse at 1440p than at 4K...

October 23, 2018 | 12:40 AM - Posted by Enquiring Minds Want to Know (not verified)

Hi Ken,

Thank you for this review.

When Nvidia specifies "Ray Tracing Speed" does it say how many reflections, or what sort of color calculation is used? I assume they're hitting triangles.
