ASUS Republic of Gamers STRIX RTX 2080 Ti - All Around Improvement

Manufacturer: ASUS

Overview

With the release of the NVIDIA GeForce RTX 2080 and 2080 Ti just last week, the graphics card vendors have awakened with a flurry of new products based on the Turing GPUs.

Today, we're taking a look at ASUS's flagship option, the ASUS Republic of Gamers STRIX 2080 Ti.

ASUS ROG STRIX 2080 Ti
Base Clock Speed: 1350 MHz
Boost Clock Speed: 1665 MHz
Memory Clock Speed: 14000 MHz GDDR6
Outputs: DisplayPort 1.4 x 2 / HDMI 2.0b x 2 / USB Type-C x 1 (VirtualLink)
Dimensions: 12 x 5.13 x 2.13 inches (30.47 x 13.04 x 5.41 cm)
Price: $1249.99

For those of you familiar with the most recent STRIX video cards, such as the GTX 1080 Ti and RX Vega 64 models, the design of the RTX 2080 Ti will be immediately familiar. The same symmetric triple-fan setup is present, in contrast to some of the recent triple-fan designs we've seen from other manufacturers that mix fan sizes.

Just as with the STRIX GTX 1080 Ti, the RTX 2080 Ti version features RGB lighting along the fan shroud of the card. 

For those of you who aren't fans of lighting on your graphics card, ASUS has included a push button on the backplate of the GPU, allowing a quick, one-touch way to disable all lighting effects. This setting persists through a reboot, letting you turn off the RGB LEDs without needing to install any software, a much-needed addition.

While the exterior of the STRIX 2080 Ti is very similar to the GTX 1080 Ti version, there are some changes to the actual heatsink design.

By moving to a taller cooler that occupies three PCIe slots, ASUS was able to increase the overall heatsink surface area by a claimed 20%.

On the connectivity front, ASUS deviated from the design found on the Founders Edition card, switching out one of the DisplayPort connections for an additional HDMI. This brings the total connectivity to two DisplayPort connections, two HDMI ports, as well as the USB-C port for VirtualLink. While this makes it difficult to do more exotic display scenarios like a triple G-Sync Surround configuration, I think the additional HDMI port offers convenient flexibility to the average gamer.

Just as we saw with the reference design, the STRIX 2080 Ti features two 8-pin PCIe power connectors.

Along the top edge of the card, there is a small switch to change between two different BIOS profiles, one for Quiet and the other for Performance. ASUS claims the card will run around 40% cooler in Performance mode, while Quiet mode offers a sound reduction of 15-25%. The card as we received it defaulted to Performance mode.

Performance

For our out-of-the-box performance testing, we left the STRIX RTX 2080 Ti in Performance mode and ran a few game benchmarks to compare to the Founders Edition RTX 2080 Ti.

The STRIX RTX 2080 Ti in its stock Performance configuration only offers a small margin above the Founders Edition card, around 2% in GTA V and Ashes of the Singularity: Escalation. However, we did see an impressive gain of 5% in Hitman.

Sound and Thermals

Given ASUS's inclusion of two different BIOS modes on the STRIX RTX 2080 Ti, we decided to test stock sound and thermal performance levels in both configurations.

While both Performance and Quiet modes on the ROG STRIX RTX 2080 Ti offer drops of 4 and 5 dBA, respectively, over the Founders Edition RTX 2080 Ti, there's only a small 1 dBA difference between Quiet mode and Performance mode.
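Decibel readings are logarithmic, so these small dBA deltas map to larger relative changes in sound power. As a rough illustration (our own back-of-the-envelope Python, not part of the review's test methodology):

```python
def relative_sound_power(delta_dba: float) -> float:
    """Fraction of the original sound power remaining after a level drop."""
    return 10 ** (-delta_dba / 10)

# A 4 dBA drop leaves roughly 40% of the original sound power, 5 dBA
# leaves about 32%, while the 1 dBA gap between the two BIOS modes
# amounts to only a ~21% reduction.
for drop in (1, 4, 5):
    print(f"-{drop} dBA -> {relative_sound_power(drop):.2f}x sound power")
```

Perceptually, a 1 dBA difference sits near the threshold of what listeners can distinguish, which matches how close the two BIOS modes measured in practice.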

The STRIX RTX 2080 Ti also offers a 0-dB fan mode where the fans don't start spinning until the GPU hits a temperature of 55 degrees Celsius. While this mode works by default in Quiet mode, you need the ASUS GPU Tweak software installed in Windows for it to work in Performance mode.
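A 0-dB mode like this is typically implemented as a temperature threshold with hysteresis, so the fans don't rapidly cycle on and off right around the trigger point. A minimal sketch of that logic (our own illustration, not ASUS's firmware; the 55 C start point comes from the spec, while the 50 C stop point is an assumed value):

```python
FAN_START_C = 55  # temperature at which fans spin up (per ASUS's spec)
FAN_STOP_C = 50   # hypothetical lower threshold to avoid on/off cycling

def update_fan(fans_on: bool, gpu_temp_c: float) -> bool:
    """Return the new fan state given the current state and GPU temperature."""
    if not fans_on and gpu_temp_c >= FAN_START_C:
        return True   # spin up once the start threshold is crossed
    if fans_on and gpu_temp_c <= FAN_STOP_C:
        return False  # spin down only after cooling well below the start point
    return fans_on    # otherwise hold the current state

# Idle at 45 C: fans stay off. Load pushes the GPU to 60 C: fans start,
# and they keep spinning until the temperature falls back to 50 C or below.
state = False
for temp in (45, 52, 60, 53, 49):
    state = update_fan(state, temp)
```

The gap between the start and stop thresholds is what keeps the fans from oscillating when the GPU idles right around the trigger temperature.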

This test shows both the stability of the GPU core clock speed, as well as what temperatures the GPU hit over a 10-minute period in Unigine Heaven. 

Overclocking

For our overclocking testing with the STRIX RTX 2080 Ti, we were able to hit a stable +140 MHz GPU clock offset, with a +100mV offset, and 125% power target.

Clock speeds in the overclocked state stabilize just above 2.0 GHz, as we've seen with every other Turing-based product so far. GPU temperatures remain stable compared to the stock state, at around 65 degrees Celsius.

At their maximum stable overclock levels, the STRIX RTX 2080 Ti and the Founders Edition hit very similar clock speeds, stabilizing at around 2025 MHz.
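For context, that stabilized clock sits well above the rated boost plus our manual offset, because NVIDIA's GPU Boost opportunistically clocks past the rated boost whenever thermal and power headroom allow. The gap, using the numbers from this review:

```python
rated_boost_mhz = 1665   # STRIX boost clock from the spec table above
manual_offset_mhz = 140  # our stable manual overclock offset
observed_mhz = 2025      # stabilized clock under load

floor_mhz = rated_boost_mhz + manual_offset_mhz
print(floor_mhz)                 # 1805: rated boost plus our offset
print(observed_mhz - floor_mhz)  # 220 MHz of extra GPU Boost headroom
```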

Despite reaching very similar clock speeds as the Founders Edition RTX 2080 Ti when overclocked, the STRIX option maintains a temperature almost 15 degrees Celsius lower.

Performance-wise, this overclock gives the STRIX RTX 2080 Ti an almost 10% advantage over the stock 2080 Ti Founders Edition. Keep in mind, though, that given the similar clock speeds we saw, an overclocked Founders Edition card should score very similarly, albeit at much warmer temperatures.

Power consumption increases from 250W in the stock configuration to a maximum of 325W in overclocked mode, a 30% increase.
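That 30% figure checks out directly against the measured wattages:

```python
stock_w, overclocked_w = 250, 325
increase = (overclocked_w - stock_w) / stock_w
print(f"{increase:.0%}")  # prints "30%"
```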

From an industry perspective, it's important that ASUS is making cards like the ROG STRIX 2080 Ti. In a world where NVIDIA keeps pushing further and further into what was traditionally the add-in board (AIB) partners' territory with its Founders Edition cards, you could foresee a reality where AIBs become less and less relevant. However, with graphics card product lines like STRIX, it's clear that the push-and-pull between the AIBs and NVIDIA benefits the consumer in the form of high-quality products.

The ASUS ROG STRIX 2080 Ti is set to be available in the coming weeks for a retail price of $1249.99, just $50 above the price of the RTX 2080 Ti Founders Edition from NVIDIA.

While the $1250 price tag might be hard to swallow for most gamers, for those looking for an RTX 2080 Ti, we highly recommend the ASUS ROG STRIX RTX 2080 Ti.

For just an extra $50, the cooling performance, lower noise levels, and higher overclocking ability make it a no-brainer to pick up the STRIX 2080 Ti over the Founders Edition, as long as your case can support the massive size.

Review Terms and Disclosure
All Information as of the Date of Publication
How product was obtained: The product is on loan from ASUS for the purpose of this review.
What happens to the product after review: The product remains the property of ASUS but is on extended loan for future testing and product comparisons.
Company involvement: ASUS had no control over the content of the review and was not consulted prior to publication.
PC Perspective Compensation: Neither PC Perspective nor any of its staff were paid or compensated in any way by ASUS for this review.
Advertising Disclosure: ASUS has purchased advertising at PC Perspective during the past twelve months.
Affiliate links: This article contains affiliate links to online retailers. PC Perspective may receive compensation for purchases through those links.
Consulting Disclosure: ASUS is not a current client of Shrout Research for products or services related to this review. 

September 27, 2018 | 08:57 PM - Posted by fla56 (not verified)

this is just stupid, marginal gains for $1250(!)

as with Intel wait for the next AMD launch to get a fair price for your preferred card...

September 28, 2018 | 01:44 AM - Posted by Genova84

Not for nothing, but "most" reviews compare the performance against prior cards for reference. In other words, I shouldn't have to click to another article to know how well a 1080 Ti does in comparison with the 2080 and 2080 Ti. Like maybe I'm crazy, but the frame rates for GTA V seem lower than I recall getting with my 1080 Ti... Maybe not. I don't know. I'm getting old.

September 28, 2018 | 09:08 AM - Posted by Stef (not verified)

They switched from an i7-5960X to an i7-8700K, and GTA performance took a hit from that.

September 28, 2018 | 11:57 AM - Posted by Ken Addison

We have substantially more data points in our initial RTX 2080 and 2080 Ti review. This particular review is comparing the relative performance of different RTX 2080 Ti cards. The comparison between the 1080 Ti and 2080 Ti as a whole isn't changed, so we simplified the data we presented.

However, I'll take this feedback into account for future reviews!

September 28, 2018 | 11:06 PM - Posted by Genova84

Thanks Ken. I read that one as well. If the point was to compare the TIs I guess I was thrown off by the 2080 (non-Ti) being included as well.

September 29, 2018 | 01:38 PM - Posted by KingKookaluke (not verified)

Thanks Ken.
I like to see the older cards' stats as well when considering an upgrade. Though I can't justify this leap from 1080 Tis yet.

September 28, 2018 | 01:29 PM - Posted by GPUsBeHardToUnderstandWithout100sOfHoursOfReading (not verified)

You should always read multiple reviews from every website that reviews GPUs/CPUs/other PC components, just because there may be differences in results across various reviews. AdoredTV has some videos commenting on the subject, where Jim puts various review sites' results in a spreadsheet and does the math to see where any differences show up. He does a good job pointing out differences in results and tries to get at why they exist.

Yes, there are prior generations of cards to be directly compared with, and that should be a reader's litmus test: a lack of direct review content comparing the previous generation's GPU SKUs against the current generation's SKUs may indicate there is more spin going on than actual impartial reviewing!

Readers should be more aware that they should never rely on a single site's review content; they should read each and every review possible across many online sources, including owner reviews that show up on the various Reddit and other related hardware blogs.

If you are gaming on current titles that do not make use of any of the new GPU's technology (legacy, raster-only games), then the GTX 1080 Ti is not really that much different from the RTX 2080 Ti. But there still are differences in the newer RTX/Turing generation of SKUs, such as total memory bandwidth being greater on GDDR6 than on GDDR5/5X.

One quick reference over to TechPowerUp's GPU database: the RTX 2070, for example, has a bit more bandwidth than the GTX 1070. If the provisional entry for the RTX 2070 in TechPowerUp's GPU database is correct, the RTX 2070's memory bandwidth (GDDR6) is 448.0 GB/s compared to the GTX 1070's 256.3 GB/s on GDDR5. Both the RTX 2070 and GTX 1070 have 256-bit-wide memory buses.

Between the GTX/Pascal generation SKUs and the RTX/Turing generation SKUs, Nvidia did not increase the overall ROP counts. But some FP16 numbers on Turing will be much larger than FP16 on Pascal, ditto for FP32 and other theoretical maximum performance metrics. So check TechPowerUp's GPU database often until the final RTX/Turing figures are known. It should be noted also that some of the Turing generation's shader-to-ROP-to-TMU ratios are different, and that the GTX 1070 is binned down from the GP104 base die tapeout, the same base die tapeout that's used for the GTX 1080, while the Turing microarchitecture-based RTX 2070 comes from the TU106 base die tapeout and NOT TU104.

So the RTX 2070's SM arrangement is different, among other differences between Pascal and Turing, and Turing's shader core counts are higher for the RTX 2070 as well. Nvidia has also rejiggered the shader cores on Turing for dual issue of INT and FP, so that's going to improve some performance over Pascal too.

Overall, readers will need to hone their research skill sets and not expect that the complete picture will be brought to their attention by any review site. There is the existence of the GPU marketing department's review manual, which is more geared towards obfuscation of all the relevant facts in order to spin the latest and greatest to prospective customers. Free review samples come with strings attached, and that's an industry-wide issue.

Even Wikipedia and WikiChip are not fully documenting any GPU's shader-to-ROP-to-TMU ratios that exist in the hardware! Some SM-based units on Nvidia's GPU SKUs differ from others in total SM numbers, in SM ratios to ROP and TMU counts, and in the larger hierarchically organized structures that are made up of SMs, shader cores, TMUs, and other units. And that has different performance effects among the various GPU variants of the same and different generations of GPU microarchitectures.

Nvidia's blogs have more information regarding GPU SM/other unit ratios and the respective unit differences of their GPU variants. Read all the whitepapers and developer blogs also.

Folks need to begin asking Wikipedia and WikiChip to offer more detailed GPU shader-to-TMU-to-ROP ratio information, and also the hierarchical arrangements of larger units that are made up of various numbers of SMs, like TPCs (texture processing clusters), and other ratios. Ditto for AMD's and Intel's graphics products.

The different functional blocks on any GPU also experience their own on-die interconnect bandwidth issues that the benchmarking/SDK software folks know about, and they should be producing benchmarks to stress test the GPU's internal cache subsystems, bandwidth limitations, and other such GPU processor limitations.

There are loads of performance monitoring tools provided to the GPU programming industry, mostly by the GPU's maker and by some third-party SDK developers (SDK plugins), intended to assist game engine and game developers in debugging and optimizing their code to perform better on that maker's specific GPU SKU and microarchitecture. And that's what David Kanter (Real World Technologies) made use of for his Pascal tile-based rendering testing article, instead of just running the industry-provided benchmarking software.

The CPU industry has a longer history of specifically targeted benchmarks to test things like cache, memory, and other CPU subsystem latency/bandwidth, but the GPU industry mostly hides and obfuscates its respective GPU shader core IP from more detailed analysis.

Also, GPUs are a bit more parallel and much harder to program than CPUs, so there are not many folks who can properly test things on their own compared to CPUs. GPUs are whole complexes of thousands of cores that are hierarchically organized, even on monolithic GPU dies, to operate more as independent units, like AMD's GCN CUs (compute units) or Nvidia's TPCs (texture processing clusters) and other hierarchically organized structures that the review sites are not discussing in much detail. So readers are pretty much on their own to research that information for themselves.

September 28, 2018 | 11:32 AM - Posted by Anonymouse (not verified)

Pay more for a hideous and oversized card that still hits the same performance as an FE card with a mild overclock (albeit because Turing already ships tuned near the limit out of the box)? Nooooo thank you!

September 28, 2018 | 04:57 PM - Posted by mouf

Actually, it shows the performance is slightly higher, plus the significant temperature and noise decrease over the Founders card...

September 28, 2018 | 08:22 PM - Posted by JHHgetsTheGreenPascalOrTuringItsChaChingAndToTheBankHeSwings (not verified)

$1249.99, and Nvidia must be really hoping that more folks will buy up all that excess Pascal-based inventory, because Nvidia wants it reduced before its yearly reporting time, when Nvidia will have to include that unsold, embarrassing market mistake on the books as excess unsold last-generation product inventory.

Why did Nvidia not learn from AMD's mining inventory mistake of the past? Nvidia can charge what it wants if there is no competing AMD product that can match the GTX 1080 Ti's performance metrics. So Nvidia setting its RTX SKUs' MSRPs so high is a viable option this time around, while Nvidia makes sales from both its Turing and its Pascal based offerings. So there are Pascal sales, in addition to any folks with trust funds so large that the roughly $1250 RTX Ti price point is chump change anyway. Either way, against the 1080 Ti or the 2080 Ti, AMD currently has no competition for the high-end consumer/gaming market.

JHH will gladly sell Pascal or Turing, as it's all good for Nvidia either way! Nvidia's extra-high MSRPs for its RTX line are probably somewhat influenced by its excess Pascal inventory, which will be sold to those without the funds to afford RTX/Turing, and Nvidia can always let its AIB partners price lower, offering supplier rebates if needed to move more RTX in the future. Nvidia also requires that its RTX AIB partners take some excess Pascal inventory in order to get a chance at more RTX inventory, and Nvidia has the upper hand in the high-end discrete GPU market.

It's all Cha Ching and in the bank anyways for Team Green!

September 29, 2018 | 01:42 PM - Posted by Anonymouse (not verified)

Go check der8auer's OC testing. You hit frequency limits LONG before you have any issues with the power limit. Even with hard volt-modding way beyond what any partner card would ship with (plus dry ice cooling), frequency cannot scale above what the FE card can do with the stock power limit.

September 28, 2018 | 11:55 AM - Posted by remc86007

I bought a GTX 280 in 2008 when it was about the best graphics card on the market. It was $650 MSRP and I think I paid $550 within a few weeks of release. It provided a much, much larger increase in performance over the prior generation than this card. The MSRP of the GTX 280 adjusted for inflation is now $760. I know AMD is a non-factor, but $1250 is just absurd.

September 30, 2018 | 09:01 AM - Posted by working on it (not verified)

No point in complaining about the card's performance; the only way is for everyone to stop buying substandard and undervalued upgrades.
This is the only way companies are going to be stopped from drip-feeding customers.

September 30, 2018 | 11:49 AM - Posted by Cyclops

Ken, is there a reason you're using Unigine Heaven as a stress test? I don't know what resolution you use for stress testing those cards but these older benchmarks are bound to generate a ton of FPS on newer hardware and run into a CPU bottleneck, which reduces the load on the GPU.

Are those GPUs at 100% utilization all the time during the stress test?
