Manufacturer: ASUS

Overview

With the release of the NVIDIA GeForce RTX 2080 and 2080 Ti just last week, the graphics card vendors have awakened with a flurry of new products based on the Turing GPUs.

Today, we're taking a look at ASUS's flagship option, the ASUS Republic of Gamers STRIX RTX 2080 Ti.

ASUS ROG STRIX RTX 2080 Ti
Base Clock Speed: 1350 MHz
Boost Clock Speed: 1665 MHz
Memory Clock Speed: 14000 MHz GDDR6
Outputs: DisplayPort 1.4 x 2 / HDMI 2.0b x 2 / USB Type-C x 1 (VirtualLink)
Dimensions: 12 x 5.13 x 2.13 inches (30.47 x 13.04 x 5.41 cm)
Price: $1,249.99


For those of you familiar with the most recent STRIX video cards, such as the GTX 1080 Ti and RX Vega 64 versions, the design of the RTX 2080 Ti will be immediately familiar. The same symmetric triple-fan setup is present, in contrast to some of the recent triple-fan designs we've seen from other manufacturers that mix different fan sizes.


Just as with the STRIX GTX 1080 Ti, the RTX 2080 Ti version features RGB lighting along the fan shroud of the card. 

Continue reading our review of the ASUS ROG STRIX RTX 2080 Ti!

Goodbye NDA, hello RTXs!

Subject: Graphics Cards | September 19, 2018 - 01:35 PM |
Tagged: turing, tu102, RTX 2080 Ti, rtx, ray tracing, nvidia, gtx, geforce, founders edition, DLSS

Today is the day the curtain is pulled back and the performance of NVIDIA's Turing based consumer cards is revealed.  If there was a benchmark, resolution or game that was somehow missed in our review then you will find it below, but make sure to peek in at the last page for a list of the games which will support Ray Tracing, DLSS or both! 

The Tech Report found that the RTX 2080 Ti is an amazing card to use if you are playing Hellblade: Senua's Sacrifice, as it clearly outperforms cards from previous generations as well as the base RTX 2080.  In many cases the RTX 2080 matches the GTX 1080 Ti, though with the extra features it is an attractive card for those with GPUs several generations old.  There is one small problem for those looking to adopt one of these cards: we have not seen prices like these outside of the Titan series before now.


"Nvidia's Turing architecture is here on board the GeForce RTX 2080 Ti, and we put it through its paces for 4K HDR gaming with some of today's most cutting-edge titles. We also explore the possibilities of Nvidia's Deep Learning Super-Sampling tech for the future of 4K gaming. Join us as we put Turing to the test."


Manufacturer: NVIDIA

A Look Back and Forward

Although NVIDIA's new GPU architecture, revealed previously as Turing, has been speculated about for what seems like an eternity at this point, we finally have our first look at exactly what NVIDIA is positioning as the future of gaming.


Unfortunately, we can't talk about this card just yet, but we can talk about what powers it.

First though, let's take a look at the journey to get here over the past 30 months or so.

Unveiled in early 2016 and marked by the launch of the GTX 1070 and 1080, Pascal was NVIDIA's long-awaited 16nm successor to Maxwell. Constrained by the oft-delayed 16nm process node, Pascal refined the shader unit design originally found in Maxwell, while lowering power consumption and increasing performance.

Next, in May 2017 came Volta, the next (and last) GPU architecture outlined in NVIDIA's public roadmaps since 2013. However, instead of the traditional launch with a new GeForce gaming card, Volta saw a different approach.

Click here to continue reading our analysis of NVIDIA's Turing Graphics Architecture

Turing vs Volta: Two Chips Enter. No One Dies.

Subject: Graphics Cards | August 21, 2018 - 08:43 PM |
Tagged: nvidia, Volta, turing, tu102, gv100

In the past, when NVIDIA launched a new GPU architecture, they would make a few chip designs to cover their market segments. Every SKU would be based on one of those chips, with varying portions disabled or re-clocked to hit multiple price points. The mainstream enthusiast (GTX -70/-80) chip of each generation is typically around 300mm2, and the high-end enthusiast (Titan / -80 Ti) chip is often around 600mm2.


Kepler used quite a bit of that die space for FP64 calculations, but that did not happen with consumer versions of Pascal. Instead, GP100 supported a 1:2:4 FP64:FP32:FP16 performance ratio. This is great for the compute community, such as scientific researchers, but games are focused on FP32. Shortly thereafter, NVIDIA released GP102, which had the same number of FP32 cores (3840) as GP100 but with much-reduced 64-bit performance… and much-reduced die area. GP100 was 610mm2, but GP102 was just 471mm2.
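To put those ratios in concrete terms, here is a minimal Python sketch of what the 1:2:4 split means for theoretical peak throughput. The 3840-core figure comes from the paragraph above; the 1.4 GHz clock is an assumed round number for illustration only, and the 1/32 FP64 rate for GP102 is the commonly cited figure for consumer Pascal rather than something stated here.

```python
# Rough peak-throughput sketch for the FP64:FP32:FP16 ratio discussion.
# 3840 cores comes from the article; the 1.4 GHz clock is an assumption for
# illustration, not an official specification.

def peak_tflops(fp32_cores, clock_ghz, ratio_to_fp32=1.0):
    """Theoretical peak = cores * 2 ops/clock (FMA) * clock (GHz), scaled by the rate ratio."""
    return fp32_cores * 2 * clock_ghz * ratio_to_fp32 / 1000.0

cores, clock = 3840, 1.4  # assumed clock

# GP100-style 1:2:4 ratio: FP64 at half the FP32 rate, FP16 at double.
print("FP32:", peak_tflops(cores, clock))          # ~10.8 TFLOPS
print("FP64:", peak_tflops(cores, clock, 1 / 2))   # ~5.4 TFLOPS
print("FP16:", peak_tflops(cores, clock, 2))       # ~21.5 TFLOPS

# GP102-style consumer chip: same FP32 core count, FP64 commonly cited at 1/32 rate.
print("Consumer FP64:", peak_tflops(cores, clock, 1 / 32))  # ~0.3 TFLOPS
```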

At this point, I’m thinking that NVIDIA is pulling scientific computing chips away from the common user to increase the value of their Tesla parts. There was no reason to make a cheap 6XXmm2 card available to the public when a 471mm2 part could take the performance crown, so why not reap extra dies from your wafer (and be able to clock them higher because of better binning)?
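As a rough illustration of the "extra dies" argument, here is a back-of-the-envelope Python sketch using the standard first-order gross-dies-per-wafer approximation. The die areas come from the text above; the 300 mm wafer size is an assumption, and defect yield and scribe lines are ignored entirely, so treat the outputs as relative counts rather than real production numbers.

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order estimate: wafer area / die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Die areas from the article; wafer size and the formula itself are assumptions.
print("GP100 (~610 mm2):", gross_dies_per_wafer(610))  # ~88 candidate dies
print("GP102 (~471 mm2):", gross_dies_per_wafer(471))  # ~119 candidate dies
```

Under those assumptions, the smaller GP102 yields roughly a third more candidate dies per wafer before defects are even considered, which is the economic incentive the paragraph above describes.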


And then Volta came out. And it was massive (815mm2).

At this point, you really cannot manufacture a larger integrated circuit. You are at the reticle limit of what TSMC (and other fabs) can expose onto your silicon. Again, it’s a 1:2:4 FP64:FP32:FP16 ratio. Again, there is no consumer version in sight. Again, it looked as if NVIDIA was going to fragment their market and leave consumers behind.

And then Turing was announced. Apparently, NVIDIA still plans on making big chips for consumers… just not with 64-bit performance. The big draw of this 754mm2 chip is its dedicated hardware for raytracing. We knew this technology was coming, and we knew that the next generation would have hardware to make it useful. I figured that meant a consumer Volta, and that NVIDIA had somehow found a way to use Tensor cores to cast rays. Apparently not… but, don’t worry, Turing has Tensor cores too… they’re just for machine-learning gaming applications. Those are above and beyond the raytracing ASICs, and the CUDA cores, and the ROPs, and the texture units, and so forth.


But, raytracing hype aside, let’s think about the product stack:

  1. NVIDIA now has two chips of roughly 800mm2 each… and
  2. They serve two completely different markets.

In fact, I cannot see either FP64 or raytracing going anywhere any time soon. As such, it’s my assumption that NVIDIA will maintain two different architectures of GPUs going forward. The only way that I can see this changing is if they figure out a multi-die solution, because neither design can get any bigger. And even then, what workload would it even perform? (Moment of silence for 10km x 10km video game maps.)

What do you think? Will NVIDIA keep two architectures going forward? If not, how will they serve all of their customers?