Manufacturer: PC Perspective

AMD and NVIDIA GPUs Tested

Tom Clancy’s The Division 2 launched over the weekend and we've been testing it out over the past couple of days with a collection of currently-available graphics cards. Of interest to AMD fans, this game joins the ranks of those well optimized for Radeon graphics, and with a new driver (Radeon Software Adrenalin 2019 Edition 19.3.2) released over the weekend, it was a good time to run some benchmarks and see how AMD and NVIDIA hardware stacks up.

d2-key-art-1920x600.jpg

The Division 2 offers DirectX 11 and 12 support, and uses Ubisoft's Snowdrop engine to provide some impressive visuals, particularly at the highest detail settings. We found the "ultra" preset to be quite attainable with very playable frame rates from most midrange-and-above hardware even at 2560x1440, though bear in mind that this game uses quite a bit of video memory. We hit a performance ceiling at 4GB with the "ultra" preset even at 1080p, so we opted for 6GB+ graphics cards for our final testing. And while most of our testing was done at 1440p we did test a selection of cards at 1080p and 4K, just to provide a look at how the GPUs on test scaled when facing different workloads.

Tom Clancy's The Division 2

d2-screen1-1260x709.jpg

Washington D.C. is on the brink of collapse. Lawlessness and instability threaten our society, and rumors of a coup in the capitol are only amplifying the chaos. All active Division agents are desperately needed to save the city before it's too late.

d2-screen4-1260x709.jpg

Developed by Ubisoft Massive and the same teams that brought you Tom Clancy’s The Division, Tom Clancy’s The Division 2 is an online open world, action shooter RPG experience set in a collapsing and fractured Washington, D.C. This rich new setting combines a wide variety of beautiful, iconic, and realistic environments where the player will experience the series’ trademark for authenticity in world building, rich RPG systems, and fast-paced action like never before.

d2-screen3-1260x709.jpg

Play solo or co-op with a team of up to four players to complete a wide range of activities, from the main campaign and adversarial PvP matches to the Dark Zone – where anything can happen.

Continue reading our preview of GPU performance with The Division 2

Manufacturer: NVIDIA

Turing at $219

NVIDIA has introduced another midrange GPU with today’s launch of the GTX 1660. It joins the GTX 1660 Ti as the company’s answer to high frame rate 1080p gaming, and hits a more aggressive $219 price point, with the GTX 1660 Ti starting at $279. What has changed, and how close is this 1660 to the “Ti” version launched just last month? We find out here.

GTX_1660_cards.jpg

RTX and Back Again

We are witnessing a shift in branding from NVIDIA, as GTX was supplanted by RTX with the introduction of the 20 series, only to see “RTX” give way to GTX as we moved down the product stack beginning with the GTX 1660 Ti. This has been a potentially confusing change for consumers used to the annual uptick in series number. Most recently we saw the 900 series move logically to 1000 series (aka 10 series) cards, so when the first 2000 series cards were released it seemed as if the 20 series would be a direct successor to the GTX cards of the previous generation.

But RTX ended up being more of a feature-level designation than the new GeForce branding we had anticipated. No, it appears GTX is here to stay, and what then of the RTX cards and their real-time ray tracing capabilities? Here the conversation shifts to higher price tags and the viability of early adoption of ray tracing tech, and in come the internet's outspoken individuals who decry ray tracing, and even more so DLSS: NVIDIA's proprietary deep learning secret sauce that has seemingly become as controversial as the Genesis planet in Star Trek III.

  GTX 1660 GTX 1660 Ti RTX 2060 RTX 2070 GTX 1080 GTX 1070 GTX 1060 6GB
GPU TU116 TU116 TU106 TU106 GP104 GP104 GP106
Architecture Turing Turing Turing Turing Pascal Pascal Pascal
SMs 22 24 30 36 20 15 10
CUDA Cores 1408 1536 1920 2304 2560 1920 1280
Tensor Cores N/A N/A 240 288 N/A N/A N/A
RT Cores N/A N/A 30 36 N/A N/A N/A
Base Clock 1530 MHz 1500 MHz 1365 MHz 1410 MHz 1607 MHz 1506 MHz 1506 MHz
Boost Clock 1785 MHz 1770 MHz 1680 MHz 1620 MHz 1733 MHz 1683 MHz 1708 MHz
Texture Units 88 96 120 144 160 120 80
ROPs 48 48 48 64 64 64 48
Memory 6GB GDDR5 6GB GDDR6 6GB GDDR6 8GB GDDR6 8GB GDDR5X 8GB GDDR5 6GB GDDR5
Memory Data Rate 8 Gbps 12 Gbps 14 Gbps 14 Gbps 10 Gbps 8 Gbps 8 Gbps
Memory Interface 192-bit 192-bit 192-bit 256-bit 256-bit 256-bit 192-bit
Memory Bandwidth 192.1 GB/s 288.1 GB/s 336.1 GB/s 448.0 GB/s 320.3 GB/s 256.3 GB/s 192.2 GB/s
Transistor Count 6.6B 6.6B 10.8B 10.8B 7.2B 7.2B 4.4B
Die Size 284 mm2 284 mm2 445 mm2 445 mm2 314 mm2 314 mm2 200 mm2
Process Tech 12 nm 12 nm 12 nm 12 nm 16 nm 16 nm 16 nm
TDP 120W 120W 160W 175W 180W 150W 120W
Launch Price $219 $279 $349 $499 $599 $379 $299

So what is a GTX 1660 minus the “Ti”? A hybrid product of sorts, it turns out. The card is based on the same TU116 GPU as the GTX 1660 Ti, but while the Ti features the full version of TU116, this non-Ti version has two of the SMs disabled, bringing the count from 24 to 22. This results in a total of 1408 CUDA cores, down from 1536 with the GTX 1660 Ti. This 128-core drop is smaller than I was expecting from the vanilla 1660, and with the same memory specs the card's capabilities would not fall far behind the Ti. But this card uses the older GDDR5 standard, matching the 8 Gbps speed and 192 GB/s bandwidth of the outgoing GTX 1060 rather than the 12 Gbps GDDR6 and 288.1 GB/s bandwidth of the GTX 1660 Ti.
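
Since both cards share a 192-bit memory interface, that bandwidth gap falls directly out of the data rate. Here is a quick sketch of the peak-bandwidth arithmetic, written as a small Python snippet for illustration (theoretical peak figures only):

```python
# Peak memory bandwidth (GB/s) = data rate per pin (Gbps) x bus width (bits) / 8
def peak_bandwidth(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth(8, 192))    # GTX 1660, 8 Gbps GDDR5 on 192-bit:     192.0 GB/s
print(peak_bandwidth(12, 192))   # GTX 1660 Ti, 12 Gbps GDDR6 on 192-bit: 288.0 GB/s
```

The results line up with the 192.1 GB/s and 288.1 GB/s figures quoted in the specifications table above.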

Continue reading our review of the NVIDIA GeForce GTX 1660 graphics card

Manufacturer: EVGA

The EVGA RTX 2060 XC Ultra

While NVIDIA’s new GTX 1660 Ti has stolen much of the spotlight from the RTX 2060 launched at CES, this more powerful Turing card is still an important part of the current video card landscape, though with its $349 starting price it does not fit into the “midrange” designation we have been used to.

EVGA_RTX2060_XC_Ultra_7.jpg

Beyond the price argument, as we saw with our initial review of the RTX 2060 Founders Edition and our subsequent look at 1440p gaming and overclocking results, the RTX 2060 far exceeds midrange performance. That made sense given the price tag, but it created some confusion, as the "2060" naming suggested a 20-series replacement for the GTX 1060.

The subsequent GTX 1660 Ti launch gave those outspoken about the price and performance of the RTX 2060 relative to the venerable GTX 1060 a more suitable replacement. That left the RTX 2060 as an interesting mid-premium option, one that could match late-2017’s GTX 1070 Ti for $100 less but still wasn’t a serious option for RTX features without DLSS to boost performance - image quality concerns in the early days of this tech notwithstanding.

EVGA_RTX2060_XC_Ultra_1.jpg

One area certainly worth exploring further with the RTX 2060 is overclocking, as it seemed possible that a healthy OC had the potential to meet RTX 2070 performance, though our early efforts were conducted using NVIDIA’s Founders Edition version, which just one month in now seems about as common as a pre-cyclone cover version of the original Sim City for IBM compatibles (you know, the pre-Godzilla litigation original?). LGR-inspired references aside, let's look at the card EVGA sent us for review.

Continue reading our review of the EVGA GeForce RTX 2060 XC Ultra graphics card

Manufacturer: NVIDIA

The TU116 GPU and First Look at Cards from MSI and EVGA

NVIDIA is introducing the GTX 1660 Ti today, a card built from the ground up to take advantage of the new Turing architecture but without real-time ray tracing capabilities. It seems like the logical next step for NVIDIA, as gamers eager for a current-generation replacement for the popular GTX 1060, and who may have been disappointed with the launch of the RTX 2060 because it was priced $100 above the 1060 6GB, now have something a lot closer to a true replacement in the GTX 1660 Ti.

There is more to the story of course, and we are still talking about a “Ti” part and not a vanilla GTX 1660, which presumably will be coming at some point down the road; but this new card should make an immediate impact. Is it fair to say that the GTX 1660 Ti is the true successor to the GTX 1060 that we might have assumed the RTX 2060 to be? Perhaps. And is the $279 price tag a good value? We will endeavor to find out here.

1660_Ti_Boxes.jpg

RTX: Off

It has been a rocky start for RTX, and while some might say that releasing GTX cards after the fact represents back-pedaling from NVIDIA, consider the possibility that the 2019 roadmap always had space for new GTX cards. Real-time ray tracing does not make sense below a certain performance threshold, and it was pretty clear with the launch of the RTX 2060 that DLSS was the only legitimate option for ray tracing at acceptable frame rates. DLSS itself has been maligned of late based on questions about visual quality, which NVIDIA has now addressed in a recent blog post. There is clearly a lot invested in DLSS, and regardless of your stance on the technology NVIDIA is going to continue working on it and releasing updates to improve performance and visual quality in games.

"As its “GTX” designation denotes, the GeForce GTX 1660 Ti does not include the RT and Tensor Cores that are found in GeForce RTX graphics cards. In order to deliver the Turing architecture to the sub-$300 graphics segment, we must be very thoughtful about the types and numbers of cores we use in the GPU: adding dedicated cores to accelerate Ray Tracing and AI doesn’t make sense unless you can first achieve a certain level of rendering performance. As a result, we chose to focus the GTX 1660 Ti’s cores exclusively on graphics rendering in order to achieve the best balance of performance, power, and cost."

If the RTX 2060 is the real-time ray tracing threshold, then it's pretty obvious that any card that NVIDIA released this year below that performance (and price) level would not carry RTX branding. And here we are with the next card, still based on the latest Turing architecture but with an all-new GPU that has no ray tracing support in hardware. There is nothing fused off here or disabled in software with TU116, and the considerable reduction in die size from the TU106 reflects this.

Continue reading our review of the NVIDIA GeForce GTX 1660 Ti graphics card!

Manufacturer: AMD

Overview and Specifications

After a month-long wait following its announcement during the AMD keynote at CES, the Radeon VII is finally here. By now you probably know that this is the world’s first 7nm gaming GPU, and it is launching today at $699, matching the price of NVIDIA’s GeForce RTX 2080.

Radeon_VII_Testbench.jpg

The AMD Radeon VII in action on the test bench

More than a gaming card, the Radeon VII is also being positioned by AMD as a card for content creators, with its 16GB of fast HBM2 memory and enhanced compute capabilities complementing what should be significantly improved gaming performance compared to the RX Vega 64.

Vega at 7nm

At the heart of the Radeon VII is the Vega 20 GPU, introduced with the Radeon Instinct MI60 and MI50 compute cards for the professional market back in November. The move to 7nm brings a reduction in die size from 495 mm2 with Vega 10 to 331 mm2 with Vega 20, but this new GPU is more than a die shrink, with the most notable improvement being memory throughput, which is significantly higher with Vega 20.

2nd_Gen_Vega_Slide.png

Double the HBM2, more than double the bandwidth

While effective memory speeds have improved only slightly, from 1.89 Gbps to 2.0 Gbps, far more impactful is the addition of two 4GB HBM2 stacks, which not only increase the total memory to 16GB but bring with them two additional memory controllers that double the interface width from 2048-bit to 4096-bit. This provides a whopping 1 TB/s (1024 GB/s) of memory bandwidth, up from 483.8 GB/s with the RX Vega 64.
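
For reference, the peak-bandwidth arithmetic (data rate × interface width ÷ 8) works out to 2.0 Gbps × 4096 bits ÷ 8 = 1024 GB/s for Vega 20, versus 1.89 Gbps × 2048 bits ÷ 8 ≈ 484 GB/s for Vega 10.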

Continue reading our review of the AMD Radeon VII graphics card!

Manufacturer: NVIDIA

Exploring 2560x1440 Results

In part one of our review of the NVIDIA GeForce RTX 2060 graphics card we looked at gaming performance using only 1920x1080 and 3840x2160 results, and while UHD is the current standard for consumer televisions (and an easy way to ensure GPU-bound performance), more than twice as many gamers play on a 2560x1440 display (3.89% vs. 1.42% for 3840x2160), according to Steam hardware survey results.

RTX_2060_Bench.jpg

Adding these 1440p results was planned from the beginning, but time constraints made testing at three resolutions before getting on a plane for CES impossible (though in retrospect UHD should have been the one excluded from part one, and in future I'll approach it that way). Regardless, we now have those 1440p results to share, having concluded testing using the same list of games and synthetic benchmarks we saw in the previous installment.

On to the benchmarks!

PC Perspective GPU Test Platform
Processor Intel Core i7-8700K
Motherboard ASUS ROG STRIX Z370-H Gaming
Memory Corsair Vengeance LED 16GB (8GBx2) DDR4-3000
Storage Samsung 850 EVO 1TB
Power Supply CORSAIR RM1000x 1000W
Operating System Windows 10 64-bit (Version 1803)
Drivers AMD: 18.50; NVIDIA: 417.54, 417.71 (OC Results)

We will begin with Unigine Superposition, which was run with the high preset settings.

Superposition_1440.png

Here we see the RTX 2060 with slightly higher performance than the GTX 1070 Ti, right in the middle of GTX 1070 and GTX 1080 performance levels. As expected so far.

Continue reading part two of our NVIDIA GeForce RTX 2060 review.

Manufacturer: NVIDIA

Formidable Mid-Range

We have to go all the way back to 2015 for NVIDIA's previous graphics card announcement at CES, when the GeForce GTX 960 was revealed during the show. Four years later we have the latest “mid-range” offering in the tradition of the GeForce x60 (or x060) cards, the RTX 2060. This launch comes as no surprise to those of us following the PC industry, as various rumors and leaks preceded the announcement by weeks and even months, but such is the reality of the modern supply chain process (sadly, few things are ever really a surprise anymore).

RTX2060_Box.jpg

But there is still plenty of new information available with the official launch of this new GPU, not the least of which is the opportunity to look at independent benchmark results to find out what to expect with this new GPU relative to the market. To this end we had the opportunity to get our hands on the card before the official launch, testing the RTX 2060 in several games as well as a couple of synthetic benchmarks. The story is just beginning, and as time permits a "part two" of the RTX 2060 review will be offered to supplement this initial look, addressing omissions and adding further analysis of the data collected thus far.

Before getting into the design and our initial performance impressions of the card, let's look into the specifications of this new RTX 2060 and see how it relates to the rest of the RTX family from NVIDIA. We are taking a high-level look at specs here, so for a deep dive into the RTX series you can check out our previous exploration of the Turing architecture here.

"Based on a modified version of the Turing TU106 GPU used in the GeForce RTX 2070, the GeForce RTX 2060 brings the GeForce RTX architecture, including DLSS and ray-tracing, to the midrange GPU segment. It delivers excellent gaming performance on all modern games with the graphics settings cranked up. Priced at $349, the GeForce RTX 2060 is designed for 1080p gamers, and delivers an excellent gaming experience at 1440p."

RTX2060_Thumbnail.jpg

  RTX 2080 Ti RTX 2080 RTX 2070 RTX 2060 GTX 1080 GTX 1070
GPU TU102 TU104 TU106 TU106 GP104 GP104
GPU Cores 4352 2944 2304 1920 2560 1920
Base Clock 1350 MHz 1515 MHz 1410 MHz 1365 MHz 1607 MHz 1506 MHz
Boost Clock 1545 MHz (1635 MHz FE) 1710 MHz (1800 MHz FE) 1620 MHz (1710 MHz FE) 1680 MHz 1733 MHz 1683 MHz
Texture Units 272 184 144 120 160 120
ROP Units 88 64 64 48 64 64
Tensor Cores 544 368 288 240 -- --
Ray Tracing Speed 10 Giga Rays 8 Giga Rays 6 Giga Rays 5 Giga Rays -- --
Memory 11GB 8GB 8GB 6GB 8GB 8GB
Memory Clock 14000 MHz 14000 MHz 14000 MHz 14000 MHz 10000 MHz 8000 MHz
Memory Interface 352-bit GDDR6 256-bit GDDR6 256-bit GDDR6 192-bit GDDR6 256-bit GDDR5X 256-bit GDDR5
Memory Bandwidth 616 GB/s 448 GB/s 448 GB/s 336.1 GB/s 320 GB/s 256 GB/s
TDP 250 W (260 W FE) 215 W (225 W FE) 175 W (185 W FE) 160 W 180 W 150 W
MSRP (current) $1000 ($1200 FE) $700 ($800 FE) $499 ($599 FE) $349 $549 $379

Continue reading our initial review of the NVIDIA GeForce RTX 2060!

Manufacturer: AMD

Vega meets Radeon Pro

Professional graphics cards are a segment of the industry that can look strange to gamers and PC enthusiasts. From the outside, it appears that businesses are paying more for almost identical hardware when compared to their gaming counterparts from both NVIDIA and AMD. 

However, a lot goes into a professional-level graphics card that makes all the difference to the consumers they are targeting. From the addition of ECC memory to protect against data corruption, all the way to a completely different driver stack with specific optimizations for professional applications, there's a lot of work put into these particular products.

The professional graphics market has gotten particularly interesting in the last few years with the rise of the NVIDIA TITAN-level GPUs and "Frontier Edition" graphics cards from AMD. While lacking ECC memory, these new GPUs have brought over some of the application level optimizations, while providing a lower price for more hobbyist level consumers.

However, if you're a professional that depends on a graphics card for mission-critical work, these options are no replacement for the real thing.

Today we're looking at one of AMD's latest Pro graphics offerings, the AMD Radeon Pro WX 8200. 

DSC05271.JPG

Click here to continue reading our review of the AMD Radeon Pro WX 8200.

Manufacturer: XFX

Overview

While 2018 has so far been full of talk about graphics cards and new GPU architectures, little of that talk has revolved around AMD. After launching their long-awaited Vega GPUs in late 2017, AMD has remained mostly quiet on the graphics front.

As we headed into summer 2018, the talk around graphics started to turn to NVIDIA's next-generation Turing architecture, in the form of the RTX 2070, 2080, and 2080 Ti, and the subsequent price creep of graphics cards within their respective product segments.

However, there has been one segment in particular that has been lacking any excitement in 2018—mid-range GPUs for gamers on a budget.

DSC05266.JPG

AMD is aiming to change that today with the release of the RX 590. Join us as we discuss the current state of affordable graphics cards.

  RX 590 RX 580 GTX 1060 6GB GTX 1060 3GB
GPU Polaris 30 Polaris 20 GP106 GP106
GPU Cores 2304 2304 1280 1152
Rated Clock (Base/Boost) 1469/1545 MHz 1257/1340 MHz 1506/1708 MHz 1506/1708 MHz
Texture Units 144 144 80 80
ROP Units 32 32 48 48
Memory 8GB 8GB 6GB 6GB
Memory Clock 8000 MHz 8000 MHz 8000 MHz 8000 MHz
Memory Interface 256-bit 256-bit 192-bit 192-bit
Memory Bandwidth 256 GB/s 256 GB/s 192 GB/s 192 GB/s
TDP 225 watts 185 watts 120 watts 120 watts
Peak Compute 7.1 TFLOPS 6.17 TFLOPS 3.85 TFLOPS (Base) 2.4 TFLOPS (Base)
Process Tech 12nm 14nm 16nm 16nm
MSRP (of retail cards) $239 $219 $249 $209

Click here to continue reading our review of the AMD RX 590!

Manufacturer: MSI

Overview

With the launch of the GeForce RTX 2070, NVIDIA seems to have applied some pressure to their partners to get SKUs that actually hit the advertised "starting at $499" price. Compared to the $599 Founders Edition RTX 2070, these lower-cost options have the potential to bring significantly more value to the consumer, especially taking into account the relative performance levels of the RTX 2070 and the GTX 1080 we observed in our initial review.

Earlier this week, we took a look at the EVGA RTX 2070 Black Edition, but it's not the only card to hit the $499 price range that we've received.

Today, we are taking a look at MSI's low-cost RTX 2070 offering, the MSI RTX 2070 Armor.

DSC05222.JPG

MSI RTX 2070 ARMOR 8G
Base Clock Speed 1410 MHz
Boost Clock Speed 1620 MHz
Memory Clock Speed 14000 MHz GDDR6
Outputs DisplayPort x 3 (v1.4) / HDMI 2.0b x 1 / USB Type-C x 1 (VirtualLink)
Dimensions 12.1 x 6.1 x 1.9 inches (309 x 155 x 50 mm)
Price $499.99

Click here to continue reading our review of the MSI RTX 2070 Armor!

Manufacturer: NVIDIA

TU106 joins the party

In general, the launch of RTX 20-series GPUs from NVIDIA in the form of the RTX 2080 and RTX 2080 Ti has been a bit of a mixed bag.

While these new products did give us the fastest gaming GPU available, the RTX 2080 Ti, they are also some of the most expensive video cards ever to launch. With a value proposition that is partially tied to the adoption of new hardware features into games, the reception of these new RTX cards has been rocky.

To say this puts a bit of pressure on the RTX 2070 launch would be an apt assessment. The community wants to see a reason to get excited for new graphics cards, without having to wait for applications to take advantage of the new hardware features like Tensor and RT cores. Conversely, NVIDIA would surely love to see an RTX launch with a bit more praise from the press and community than their previous release has garnered.

The wait is over: today we are taking a look at the RTX 2070, the last of the RTX-series graphics cards announced by NVIDIA back in August.

icon.jpg

  RTX 2080 Ti GTX 1080 Ti RTX 2080 RTX 2070 GTX 1080 GTX 1070 RX Vega 64 (Air)
GPU TU102 GP102 TU104 TU106 GP104 GP104 Vega 64
GPU Cores 4352 3584 2944 2304 2560 1920 4096
Base Clock 1350 MHz 1408 MHz 1515 MHz 1410 MHz 1607 MHz 1506 MHz 1247 MHz
Boost Clock 1545 MHz (1635 MHz FE) 1582 MHz 1710 MHz (1800 MHz FE) 1620 MHz (1710 MHz FE) 1733 MHz 1683 MHz 1546 MHz
Texture Units 272 224 184 144 160 120 256
ROP Units 88 88 64 64 64 64 64
Tensor Cores 544 -- 368 288 -- -- --
Ray Tracing Speed 10 GRays/s -- 8 GRays/s 6 GRays/s -- -- --
Memory 11GB 11GB 8GB 8GB 8GB 8GB 8GB
Memory Clock 14000 MHz 11000 MHz 14000 MHz 14000 MHz 10000 MHz 8000 MHz 1890 MHz
Memory Interface 352-bit G6 352-bit G5X 256-bit G6 256-bit G6 256-bit G5X 256-bit G5 2048-bit HBM2
Memory Bandwidth 616 GB/s 484 GB/s 448 GB/s 448 GB/s 320 GB/s 256 GB/s 484 GB/s
TDP 250 W (260 W FE) 250 W 215 W (225 W FE) 175 W (185 W FE) 180 W 150 W 292 W
Peak Compute (FP32) 13.4 TFLOPS (14.2 TFLOPS FE) 10.6 TFLOPS 10 TFLOPS (10.6 TFLOPS FE) 7.5 TFLOPS (7.9 TFLOPS FE) 8.2 TFLOPS 6.5 TFLOPS 13.7 TFLOPS
Transistor Count 18.6 B 12.0 B 13.6 B 10.8 B 7.2 B 7.2 B 12.5 B
Process Tech 12nm 16nm 12nm 12nm 16nm 16nm 14nm
MSRP (current) $1000 ($1200 FE) $699 $700 ($800 FE) $499 ($599 FE) $549 $379 $499

Click here to continue reading our review of the NVIDIA GeForce RTX 2070!

Manufacturer: ASUS

Overview

With the release of the NVIDIA GeForce RTX 2080 and 2080 Ti just last week, the graphics card vendors have awakened with a flurry of new products based on the Turing GPUs.

Today, we're taking a look at ASUS's flagship option, the ASUS Republic of Gamers STRIX 2080 Ti.

ASUS ROG STRIX 2080 Ti
Base Clock Speed 1350 MHz
Boost Clock Speed 1665 MHz
Memory Clock Speed 14000 MHz GDDR6
Outputs DisplayPort x 2 (v1.4) / HDMI 2.0b x 2 / USB Type-C x1 (VirtualLink)
Dimensions 12 x 5.13 x 2.13 inches (30.47 x 13.04 x 5.41 cm)
Price $1249.99

DSC05198.JPG

For those of you familiar with the most recent STRIX video cards, such as the GTX 1080 Ti and the RX Vega 64, the design of the RTX 2080 Ti version will be immediately familiar. The same symmetric triple-fan setup is present, in contrast to some of the recent triple-fan designs we've seen from other manufacturers that use different fan sizes.

DSC05207.JPG

Just as with the STRIX GTX 1080 Ti, the RTX 2080 Ti version features RGB lighting along the fan shroud of the card. 

Continue reading our review of the ASUS ROG STRIX RTX 2080 Ti!

Manufacturer: MSI

Our First Look

Over the years, the general trend for new GPU launches, especially GPUs based on a new graphics architecture, has been to launch only with the "reference" graphics card designs developed by AMD or NVIDIA. While the idea of a "reference" design has changed over the years, with the introduction of NVIDIA's Founders Edition cards and different special edition designs at launch from AMD like we saw with Vega 56 and Vega 64, generally there aren't any custom designs from partners available at launch.

However, with the launch of NVIDIA's Turing architecture in the form of the RTX 2080 and RTX 2080 Ti, we've been presented with an embarrassment of riches: plenty of custom cooler and custom PCB designs from add-in board (AIB) manufacturers.

Today, we're taking a look at our first custom RTX 2080 design, the MSI RTX 2080 Gaming X Trio.

MSI GeForce RTX 2080 Gaming X Trio
Base Clock Speed 1515 MHz
Boost Clock Speed 1835 MHz
Memory Clock Speed 7000 MHz GDDR6
Outputs DisplayPort x 3 (v1.4) / HDMI 2.0b x 1 / USB Type-C x1 (VirtualLink)
Dimensions 12.9 x 5.5 x 2.1 inches (327 x 140 x 55.6 mm)
Weight 3.42 lbs (1553 g)
Price $849.99

Introduced with the GTX 1080 Ti, the Gaming X Trio is, as you might expect, a triple-fan design that makes up MSI's highest-performance graphics card offering.

DSC05188.JPG

Click here to continue reading our review of the MSI GeForce RTX 2080 Gaming X Trio!

Manufacturer: NVIDIA

New Generation, New Founders Edition

At this point, it seems that calling NVIDIA's 20-series GPUs highly anticipated would be a bit of an understatement. After months and months of speculation about what these new GPUs would be called, what architecture they would be based on, and what features they would bring, the NVIDIA GeForce RTX 2080 and RTX 2080 Ti were officially unveiled in August alongside the Turing architecture.

DSC05181.JPG

We've already posted our deep dive into the Turing architecture and the TU102 and TU104 GPUs powering these new graphics cards, but here's a short takeaway: Turing provides efficiency improvements in both memory and shader performance, and adds specialized hardware to accelerate deep learning (Tensor cores) and enable real-time ray tracing (RT cores).

  RTX 2080 Ti Quadro RTX 6000 GTX 1080 Ti RTX 2080 Quadro RTX 5000 GTX 1080 TITAN V RX Vega 64 (Air)
GPU TU102 TU102 GP102 TU104 TU104 GP104 GV100 Vega 64
GPU Cores 4352 4608 3584 2944 3072 2560 5120 4096
Base Clock 1350 MHz 1455 MHz 1408 MHz 1515 MHz 1620 MHz 1607 MHz 1200 MHz 1247 MHz
Boost Clock 1545 MHz (1635 MHz FE) 1770 MHz 1582 MHz 1710 MHz (1800 MHz FE) 1820 MHz 1733 MHz 1455 MHz 1546 MHz
Texture Units 272 288 224 184 192 160 320 256
ROP Units 88 96 88 64 64 64 96 64
Tensor Cores 544 576 -- 368 384 -- 640 --
Ray Tracing Speed 10 GRays/s 10 GRays/s -- 8 GRays/s 8 GRays/s -- -- --
Memory 11GB 24GB 11GB 8GB 16GB 8GB 12GB 8GB
Memory Clock 14000 MHz 14000 MHz 11000 MHz 14000 MHz 14000 MHz 10000 MHz 1700 MHz 1890 MHz
Memory Interface 352-bit G6 384-bit G6 352-bit G5X 256-bit G6 256-bit G6 256-bit G5X 3072-bit HBM2 2048-bit HBM2
Memory Bandwidth 616 GB/s 672 GB/s 484 GB/s 448 GB/s 448 GB/s 320 GB/s 653 GB/s 484 GB/s
TDP 250 W (260 W FE) 260 W 250 W 215 W (225 W FE) 230 W 180 W 250 W 292 W
Peak Compute (FP32) 13.4 TFLOPS (14.2 TFLOPS FE) 16.3 TFLOPS 10.6 TFLOPS 10 TFLOPS (10.6 TFLOPS FE) 11.2 TFLOPS 8.2 TFLOPS 14.9 TFLOPS 13.7 TFLOPS
Transistor Count 18.6 B 18.6 B 12.0 B 13.6 B 13.6 B 7.2 B 21.0 B 12.5 B
Process Tech 12nm 12nm 16nm 12nm 12nm 16nm 12nm 14nm
MSRP (current) $1000 ($1200 FE) $6,300 $699 $700 ($800 FE) $2,300 $549 $2,999 $499

 

As unusual as it is for them, NVIDIA has decided to release both the RTX 2080 and RTX 2080 Ti at the same time as the first products in the Turing family.

The TU102-based RTX 2080 Ti features 4352 CUDA cores, while the TU104-based RTX 2080 features 2944, fewer than the GTX 1080 Ti's 3584. Also, these new RTX GPUs have moved to GDDR6 from the GDDR5X we found on the GTX 10-series.

DSC05175.JPG

Click here to continue reading our review of the RTX 2080 and 2080 Ti.

Manufacturer: NVIDIA

A Look Back and Forward

Although NVIDIA's new GPU architecture, revealed previously as Turing, has been speculated about for what seems like an eternity at this point, we finally have our first look at exactly what NVIDIA is positioning as the future of gaming.

geforce-rtx-2080.png

Unfortunately, we can't talk about this card just yet, but we can talk about what powers it.

First though, let's take a look at the journey to get here over the past 30 months or so.

Unveiled in early 2016, Pascal, marked by the launch of the GTX 1070 and 1080, was NVIDIA's long-awaited 16nm successor to Maxwell. Constrained by the oft-delayed 16nm process node, Pascal refined the shader unit design originally found in Maxwell, while lowering power consumption and increasing performance.

Next, in May 2017 came Volta, the next (and last) GPU architecture outlined in NVIDIA's public roadmaps since 2013. However, instead of the traditional launch with a new GeForce gaming card, Volta saw a different approach.

Click here to continue reading our analysis of NVIDIA's Turing Graphics Architecture

Manufacturer: NVIDIA

Retesting the 2990WX

Earlier today, NVIDIA released version 399.24 of their GeForce drivers for Windows, citing Game Ready support for some newly released games including Shadow of the Tomb Raider, The Call of Duty: Black Ops 4 Blackout Beta, and Assetto Corsa Competizione early access. 

399-24-changelog.png

While this in and of itself is a normal event, we shortly started to get some tips from readers about an interesting bug fix found in NVIDIA's release notes for this specific driver revision.

f12017.png

Specifically addressing performance differences between 16-core/32-thread processors and 32-core/64-thread processors, this patched issue immediately recalled our experiences benchmarking the AMD Ryzen Threadripper 2990WX back in August, where we saw frame rates in some games around 50% slower than on the 16-core Threadripper 2950X.

This particular patch note led us to update our Ryzen Threadripper 2990WX test platform to this latest NVIDIA driver release and see if there were any noticeable changes in performance.

The full testbed configuration is listed below:

Test System Setup
CPU AMD Ryzen Threadripper 2990WX
Motherboard ASUS ROG Zenith Extreme - BIOS 1304
Memory 16GB Corsair Vengeance DDR4-3200 (operating at DDR4-2933)
Storage Corsair Neutron XTi 480 SSD
Sound Card On-board
Graphics Card NVIDIA GeForce GTX 1080 Ti 11GB
Graphics Drivers NVIDIA 398.26 and 399.24
Power Supply Corsair RM1000x
Operating System Windows 10 Pro x64 RS4 (17134.165)

Included at the end of this article are the full results from our entire suite of game benchmarks from our CPU testbed, but first, let's take a look at some of the games that provided particularly bad issues with the 2990WX previously.

The interesting data points for this testing are the 2990WX scores on both driver revisions (398.26, which we tested across every CPU, and the new 399.24), as well as the results from the 1/4 core compatibility mode and from the Ryzen Threadripper 2950X. From the wording of the patch notes, we would expect gaming performance between the 16-core 2950X and the 32-core 2990WX to be very similar.

Grand Theft Auto V

gtav-new.png

GTA V was previously one of the worst offenders in our original 2990WX testing, with the frame rate almost halving compared to the 2950X.

However, with the newest GeForce driver update, we see this gap shrinking to around a 20% difference.

Continue reading our revised look at Threadripper 2990WX gaming performance!!

Manufacturer: AMD

Your Mileage May Vary

One of the most interesting things going around in the computer hardware communities this past weekend was the revelation from a user named bryf50 on Reddit that he had somehow gotten his FreeSync display working with his NVIDIA GeForce GPU.

For those of you that might not be familiar with the particular ins-and-outs of these variable refresh technologies, getting FreeSync displays to work on NVIDIA GPUs is potentially a very big deal.

While NVIDIA GPUs support the NVIDIA G-SYNC variable refresh rate standard, they are not compatible with Adaptive Sync (the technology on which FreeSync is based) displays. Despite Adaptive Sync being an open standard, and an optional extension to the DisplayPort specification, NVIDIA so far has chosen not to support these displays.

However, this presents some major downsides for consumers looking to purchase displays and graphics cards. Due to the lack of interoperability, consumers can get locked into a GPU vendor if they want to continue using the variable refresh functionality of their display. Plus, Adaptive Sync/FreeSync monitors in general seem to be significantly less expensive for similar specifications.

01.jpg

Click here to continue reading our exploration into FreeSync support on NVIDIA GPUs!

 

Manufacturer: ASUS

A long time coming

To say that the ASUS ROG Swift PG27UQ has been a long time coming is a bit of an understatement. In the computer hardware world where we are generally lucky to know about a product for 6-months, the PG27UQ is a product that has been around in some form or another for at least 18 months.

Originally demonstrated at CES 2017, the ASUS ROG Swift PG27UQ debuted alongside the Acer Predator X27 as the world's first G-SYNC displays supporting HDR. With promised brightness levels of 1000 nits, G-SYNC HDR was a surprising and aggressive announcement considering that HDR was just starting to pick up steam on TVs, and was unheard of for PC monitors. On top of the HDR support, these monitors were the first announced displays sporting a 144Hz refresh rate at 4K, due to their DisplayPort 1.4 connections.

However, delays led to the PG27UQ being shown yet again at CES this year, with a promised release date of Q1 2018. Further slips in the release schedule bring us to today, when the ASUS PG27UQ is available for pre-order for a staggering $2,000 and set to ship at some point this month.

In some ways, the launch of the PG27UQ very much mirrors the launch of the original G-SYNC display, the ROG Swift PG278Q. Both displays represented the launch of a long-awaited technology in a 27" form factor, and both were seen as extremely expensive at their time of release.

DSC05009.JPG

Finally, we have our hands on a production model of the ASUS PG27UQ, the first monitor to support G-SYNC HDR, as well as 144Hz refresh rate at 4K. Can a PC monitor really be worth a $2,000 price tag? 

Continue reading our review of the ASUS ROG PG27UQ G-SYNC HDR Monitor!

Manufacturer: Intel

System Overview

Announced at Intel's Developer Forum in 2012, and launched later that year, the Next Unit of Computing (NUC) project was initially a bit confusing to the enthusiast PC press. In a market that appeared to be discarding traditional desktops in favor of notebooks, it seemed a bit odd to launch a product that still depended on a monitor, mouse, and keyboard, yet didn't provide any more computing power.

Despite this criticism, the NUC lineup has rapidly expanded over the years, seeing success in areas such as digital signage and enterprise environments. However, the enthusiast PC market has mostly eluded the lure of the NUC.

Intel's Skylake-based Skull Canyon NUC was the company's first attempt to cater to the enthusiast market, with a slight departure from the traditional 4-in x 4-in form factor and the adoption of their best-ever integrated graphics solution in the Iris Pro. Additionally, the ability to connect external GPUs via Thunderbolt 3 meant Skull Canyon offered more of a focus on high-end PC graphics.

However, Skull Canyon mostly failed to resonate with hardcore PC users, and it seemed that Intel lacked the proper solution to make a "gaming-focused" NUC device—until now.

8th Gen Intel Core processor.jpg

Announced at CES 2018, the lengthily named 8th Gen Intel® Core™ processors with Radeon™ RX Vega M Graphics (henceforth referred to by their code name, Kaby Lake-G) mark a new direction for Intel. By partnering with one of the leaders in high-end PC graphics, AMD, Intel can now pair their processors with graphics capable of playing modern games at high resolutions and frame rates.

DSC04773.JPG

The first product to launch using the new Kaby Lake-G family of processors is Intel's own NUC, the NUC8i7HVK (Hades Canyon). Will the marriage of Intel and AMD finally provide a NUC capable of at least moderate gaming? Let's dig a bit deeper and find out.

Click here to continue reading our review of the Intel Hades Canyon NUC!

Manufacturer: Microsoft

O Rayly? Ya Rayly. No Ray!

Microsoft has just announced a raytracing extension to DirectX 12, called DirectX Raytracing (DXR), at the 2018 Game Developers Conference in San Francisco.

microsoft-2015-directx12-logo.jpg

The goal is not to completely replace rasterization… at least not yet. The technique will mostly be used for effects that require supplementary datasets, such as reflections, ambient occlusion, and refraction. Rasterization, the typical way that 3D geometry gets drawn on a 2D display, converts triangle coordinates into screen coordinates, and then a point-in-triangle test runs across every sample. This will likely occur once per AA sample (minus pixels that the triangle can’t possibly cover -- such as a pixel outside of the triangle's bounding box -- but that's just optimization).

microsoft-2018-gdc-directx12raytracing-rasterization.png

For rasterization, each triangle is laid on a 2D grid corresponding to the draw surface.
If any sample is in the triangle, the pixel shader is run.
This example shows the rotated grid MSAA case.
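
To make that point-in-triangle step concrete, here is a minimal Python sketch (our own illustration, not engine or driver code) of the edge-function coverage test a rasterizer runs for each sample inside a triangle's bounding box:

```python
# Minimal illustration of rasterization coverage testing (not production code).
def edge(ax, ay, bx, by, px, py):
    # Signed area of triangle (a, b, p); the sign says which side of edge a->b the point is on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def covered(tri, px, py):
    (x0, y0), (x1, y1), (x2, y2) = tri
    w0 = edge(x1, y1, x2, y2, px, py)
    w1 = edge(x2, y2, x0, y0, px, py)
    w2 = edge(x0, y0, x1, y1, px, py)
    # The sample is inside if it sits on the same side of all three edges.
    return (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0)

triangle = [(1.0, 1.0), (8.0, 2.0), (4.0, 7.0)]
# Walk every pixel center in the triangle's bounding box; where coverage passes,
# a real GPU would go on to invoke the pixel shader for that sample.
for y in range(1, 8):
    print("".join("#" if covered(triangle, x + 0.5, y + 0.5) else "." for x in range(1, 9)))
```

With MSAA, the same test simply runs at multiple sample positions per pixel rather than only at the pixel center.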

A program, called a pixel shader, is then run with some set of data that the GPU could gather on every valid pixel in the triangle. This set of data typically includes things like world coordinate, screen coordinate, texture coordinates, nearby vertices, and so forth. This lacks a lot of information, especially things that are not visible to the camera. The application is free to provide other sources of data for the shader to crawl… but what?

  • Cubemaps are useful for reflections, but they don’t necessarily match the scene.
  • Voxels are useful for lighting, as seen with NVIDIA’s VXGI and VXAO.

This is where DirectX Raytracing comes in. There are quite a few components to it, but it's basically a new pipeline that handles how rays are cast into the environment. After being queued, it starts out with a ray-generation stage, and then, depending on what happens to the ray in the scene, there are closest-hit, any-hit, and miss shaders. Ray generation allows the developer to set up how the rays are cast by calling an HLSL intrinsic instruction, TraceRay (which is a clever way of invoking them, by the way). This function takes an origin and a direction, so you could choose, for example, to cast rays only in the direction of lights if your algorithm was meant to approximate partially occluded soft shadows from a non-point light. (There are better algorithms to do that, but it's just the first example that came off the top of my head.) The closest-hit, any-hit, and miss shaders run at the point where the traced ray ends.

To connect this with current technology, imagine that ray-generation is like a vertex shader in rasterization, where it sets up the triangle to be rasterized, leading to pixel shaders being called.

microsoft-2018-gdc-directx12raytracing-multibounce.png

Even more interesting – the closest-hit, any-hit, and miss shaders can call TraceRay themselves, which is used for multi-bounce and other recursive algorithms (see: figure above). The obvious use case might be reflections, which is the headline of the GDC talk, but they want it to be as general as possible, aligning with the evolution of GPUs. Looking at NVIDIA’s VXAO implementation, it also seems like a natural fit for a raytracing algorithm.
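
To illustrate the shape of that pipeline, here is a deliberately simplified Python sketch (a conceptual analogy only; real DXR shaders are written in HLSL, and all names below are our own): ray generation casts one ray per pixel, each ray resolves to either a closest-hit or a miss shader, and the hit shader traces a second, recursive ray for a single reflective bounce.

```python
# Toy analogy of the DXR control flow; not the actual API. The "scene" is just a
# ground plane at y = 0 under a sky, so intersection testing stays trivial.
MAX_DEPTH = 2

def intersect_ground(origin, direction):
    oy, dy = origin[1], direction[1]
    if dy >= 0:
        return None                              # ray never reaches the plane
    t = -oy / dy
    return t if t > 0 else None

def miss_shader(direction):
    return (0.3, 0.5, 0.9)                       # flat sky color

def closest_hit_shader(origin, direction, t, depth):
    hit_point = tuple(o + t * d for o, d in zip(origin, direction))
    base = (0.4, 0.4, 0.4)                       # flat ground color
    if depth >= MAX_DEPTH:
        return base
    # A hit shader can trace again: reflect off the ground (normal points straight up).
    bounced = (direction[0], -direction[1], direction[2])
    reflected = trace_ray(hit_point, bounced, depth + 1)
    return tuple(0.5 * b + 0.5 * r for b, r in zip(base, reflected))

def trace_ray(origin, direction, depth=0):
    t = intersect_ground(origin, direction)
    return miss_shader(direction) if t is None else closest_hit_shader(origin, direction, t, depth)

def ray_generation(width, height):
    # One ray per pixel from a camera hovering at (0, 1, 0), looking down +z.
    camera = (0.0, 1.0, 0.0)
    return [[trace_ray(camera, (x / width - 0.5, y / height - 0.5, 1.0))
             for x in range(width)] for y in range(height)]

image = ray_generation(4, 3)
print(image[0][0], image[-1][-1])
```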

Speaking of data structures, Microsoft also detailed what they call the acceleration structure. Each object is composed of two levels. The top level contains per-object metadata, like its transformation and whatever other data the developer wants to add to it. The bottom level contains the geometry. The briefing states "essentially vertex and index buffers," so we asked for clarification. DXR requires that triangle geometry be specified as vertex positions in either 32-bit float3 or 16-bit float3 values. There is also a stride property, so developers can tweak data alignment and use their rasterization vertex buffer, as long as it's HLSL float3, either 16-bit or 32-bit.
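
As a rough mental model of that two-level layout, here is a sketch in Python for illustration (the real structures are built through the D3D12 API; these class and field names are our own):

```python
# Conceptual sketch (Python dataclasses, not the actual D3D12 structs): a bottom
# level holding geometry, and a top level holding per-instance metadata such as
# the transform plus whatever else the developer attaches.
from dataclasses import dataclass, field
from typing import List, Tuple

Float3 = Tuple[float, float, float]

@dataclass
class BottomLevelAS:
    vertices: List[Float3]          # "essentially vertex and index buffers"
    indices: List[int]
    vertex_stride: int = 12         # bytes between positions; 3 x 32-bit float here

@dataclass
class TopLevelInstance:
    geometry: BottomLevelAS
    transform: List[float]          # 3x4 row-major object-to-world matrix
    user_data: dict = field(default_factory=dict)

# A one-triangle "scene": one bottom-level entry, instanced once in the top level.
triangle = BottomLevelAS(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)], indices=[0, 1, 2])
scene = [TopLevelInstance(geometry=triangle,
                          transform=[1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0])]
print(len(scene), len(scene[0].geometry.vertices))
```

The stride mentioned above corresponds to vertex_stride here: as long as each position is an HLSL float3, the positions can live inside a larger, interleaved vertex buffer.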

As for the tools to develop this in…

microsoft-2018-gdc-PIX.png

Microsoft announced PIX back in January 2017. This is a debugging and performance analyzer for 64-bit, DirectX 12 applications. Microsoft will upgrade it to support DXR as soon as the API is released (specifically, “Day 1”). This includes the API calls, the raytracing pipeline resources, the acceleration structure, and so forth. As usual, you can expect Microsoft to support their APIs with quite decent – not perfect, but decent – documentation and tools. They do it well, and they want to make sure it’s available when the API is.

ea-2018-SEED screenshot (002).png

Example of DXR via EA's in-development SEED engine.

In short, raytracing is here, but it's not taking over rasterization. It doesn't need to. Microsoft is just giving game developers another, standardized mechanism to gather supplementary data for their games. Several game engines have already announced support for this technology, including the usual suspects of top-tier game technology:

  • Frostbite (EA/DICE)
  • SEED (EA)
  • 3DMark (Futuremark)
  • Unreal Engine 4 (Epic Games)
  • Unity Engine (Unity Technologies)

They also said, “and several others we can’t disclose yet”, so this list is not even complete. But, yeah, if you have Frostbite, Unreal Engine, and Unity, then you have a sizeable market as it is. There is always a question of how deeply each of these engines will support the technology. Currently, raytracing is not portable outside of DirectX 12, because it's literally being announced today, and each of these engines intends to support more than just Windows 10 and Xbox.

Still, we finally have a standard for raytracing, which should drive vendors to optimize in a specific direction. From there, it's just a matter of someone taking the risk to actually use the technology for a cool work of art.

If you want to read more, check out Ryan's post about the also-announced RTX, NVIDIA's raytracing technology.