NVIDIA Officially Announces Turing GPU Architecture at SIGGRAPH 2018

Subject: General Tech | August 13, 2018 - 07:43 PM |
Tagged: turing, siggraph 2018, rtx, quadro rtx 8000, quadro rtx 6000, quadro rtx 5000, quadro, nvidia

Today at the professional graphics-focused SIGGRAPH conference, NVIDIA CEO Jen-Hsun Huang unveiled details of the company's much-rumored next GPU architecture, codenamed Turing.


At the core of the Turing architecture are what NVIDIA refers to as two "engines": one for accelerating ray tracing, and the other for accelerating AI inferencing.

The ray tracing units are called RT Cores and are not to be confused with the NVIDIA RTX real-time ray tracing technology announced at GDC this year. There, NVIDIA was using its OptiX AI-powered denoising filter to clean up ray-traced images, allowing them to save on rendering resources, but the actual ray tracing was still being done on the GPU's shader cores themselves.

Now, these RT Cores will perform the ray calculations themselves at what NVIDIA claims is up to 10 GigaRays/second, or up to 25x the ray tracing performance of the current Pascal architecture.
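To put "GigaRays/second" in context, the fundamental operation a ray tracer performs billions of times per frame is an intersection test. The sketch below is an illustrative toy in Python, not NVIDIA's implementation; real RT Cores accelerate BVH traversal and ray-triangle tests, but a ray-sphere test shows the same idea:

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance t to the nearest hit along the ray, or None on a miss."""
    # Vector from ray origin to sphere center
    oc = [o - c for o, c in zip(origin, center)]
    # Quadratic coefficients for |origin + t*direction - center|^2 = radius^2
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# A ray starting at z=5 pointing down -z toward a unit sphere at the origin
# hits the near surface at t = 4.
hit = ray_sphere_intersect((0, 0, 5), (0, 0, -1), (0, 0, 0), 1.0)
```

Doing 10 billion of these (plus acceleration-structure traversal) per second in fixed-function hardware is what frees the shader cores for shading work.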


Just like the Volta-based Quadro GV100, these new Quadro RTX cards will also feature Tensor Cores for deep learning acceleration. It is unclear whether these Tensor Cores are unchanged from what we saw in Volta.

In addition to the RT Cores and Tensor Cores, Turing also features an all-new design for the traditional Streaming Multiprocessor (SM) GPU units. Changes include an integer execution unit that runs in parallel with the floating point datapath, and a new unified cache architecture with double the bandwidth of the previous generation.

NVIDIA claims these changes, combined with up to 4,608 CUDA cores in the highest configuration, will enable up to 16 TFLOPS of floating point performance alongside 16 trillion integer operations per second.
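As a back-of-the-envelope sanity check on that 16 TFLOPS figure: each CUDA core can retire one fused multiply-add (two floating point operations) per clock, so the claim implies a boost clock of roughly 1.74 GHz for the 4,608-core part, which is plausible for this class of GPU:

```python
# Implied boost clock from NVIDIA's claimed peak throughput.
cuda_cores = 4608
flops_per_core_per_clock = 2   # one fused multiply-add = 2 FLOPs
target_flops = 16e12           # NVIDIA's claimed 16 TFLOPS

implied_clock_ghz = target_flops / (cuda_cores * flops_per_core_per_clock) / 1e9
print(f"{implied_clock_ghz:.2f} GHz")  # ~1.74 GHz
```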


Alongside the announcement of the Turing architecture, NVIDIA unveiled the Quadro RTX 5000, 6000, and 8000 products, due in Q4 2018.

In addition to tonight's announcements at SIGGRAPH, NVIDIA is expected to announce consumer GeForce products featuring the Turing architecture next week at an event in Germany.

PC Perspective is at SIGGRAPH and will also be at NVIDIA's event in Germany next week, so stay tuned for more details!

Source: NVIDIA

August 13, 2018 | 08:44 PM - Posted by Raja Koduri (INTEL) (not verified)

Really poor RTG & AMD Vega technology ^^ (Intel lover)

August 14, 2018 | 05:16 AM - Posted by Power (not verified)

It is niche product so developers can get familiar with the architecture. The real deal will be made using 7nm lithography.

August 13, 2018 | 08:51 PM - Posted by Steven Petersen (not verified)

AMD had ray tracing in the HD 2900 XT, but no one remembers that... It's been a thing for a long time. I do believe AMD developed it...

August 13, 2018 | 09:01 PM - Posted by ipkh

Looks good. Just the cut-down parts from the Titan V and Volta cards. Looks like they only announced Quadro series cards, but some sites went apeshit over the 6x rendering performance metric (ray tracing, I assume) and suggested they are gaming cards.

August 13, 2018 | 09:25 PM - Posted by 50004000 (not verified)

The cut-down version of the RTX 5000 will more than likely be the consumer 1180/2080.

August 13, 2018 | 10:27 PM - Posted by RayTracingIsAComputeWorkloadTheMoreShaderCoresTheBetter (not verified)

Ray tracing is a compute workload traditionally done on CPU cores, and GPUs have been used for ray tracing acceleration via CUDA and OpenCL for some time now. AMD's Radeon ProRender does ray tracing, including ray tracing mixed with rasterization. AMD also does heterogeneous rendering on the CPU and GPU. See this slide presentation (1) from GDC 2018.

So Vega 20 at 7nm and that compute hardware has been ready to accelerate rays on Vega's nCUs for a good long while.

AMD's got shader cores in excess for ray tracing on its GPUs, even more shader cores than Nvidia on AMD's consumer GPU offerings. Now Nvidia is upping the shader core counts, and look at the size of those dies based on TSMC's 12nm. Vega 20 will be on 7nm, so the dies are going to be smaller, and AMD can double them up on a single PCIe card and get more shader cores that way.

Nvidia needs to go into greater detail concerning their ray tracing cores, but Vega's nCU compute cores are already made for such compute-oriented workloads, and Vega's explicit primitive shader hardware can also be made use of by graphics software. Implicit primitive shaders were canceled for gaming workloads, but the explicit primitive shader hardware is still there for new software to directly target.

(1)

"[2018 GDC] Real-Time Ray-Tracing Techniques for Integration into Existing Renderers"

https://www.slideshare.net/takahiroharada/2018-gdc-realtime-raytracing-t...

August 13, 2018 | 10:44 PM - Posted by Nick W (not verified)

Jesus the AMD fans are out tonight. I love AMD but I really doubt they will be able to compete here.

August 13, 2018 | 11:27 PM - Posted by DoofusGamerRectousNeedNotApply (not verified)

Yes, for AMD on the price/performance metrics they will compete, and this is not about any damn gaming workloads; SIGGRAPH is about professional workloads. And that top-end Quadro RTX SKU is going to cost $10,000, so that's compared to what AMD will be charging for Vega 20 in the single and dual GPU die PCIe card variants. Vega 20 is getting some AI instructions added to Vega's ISA, so that and the shader counts on Vega at 7nm are still unknown.

All that high-clocked GDDR6 at its top rated speed is going to suck some juice compared to HBM2 that's clocked lower over that much wider HBM2 interface. So that's Vega 20 at 7nm compared to Turing at 12nm(?) on those jumbo-sized Nvidia dies. And Nvidia's upping the shader counts also, just like AMD has had for a good while, so the power used on Nvidia's cards is going to be higher this time around. Vega 20 is still going to be using HBM2, and that's not going to suck as much juice as GDDR6.

It's not about gaming fans, as that's worthless in the professional markets. We are talking about animation and professional graphics workloads, not some dunderheaded gamer down in Mom's basement whiling his life away playing games when he should be Getting A Job.

August 14, 2018 | 01:31 AM - Posted by arseni

Will there be a meetup in Cologne?
And are you planning on attending Gamescom after the Nvidia event? Considering you're already there, you might catch that one as well.

August 14, 2018 | 04:48 PM - Posted by James

It sounds like an AMD card with the RTX name. Perhaps because of RTG (Radeon Technologies Group). I guess they want to push ray tracing heavily, which is a good way to waste a lot of processing power for a little bit of improvement in visual quality. It is a huge die, so it isn't going to be cheap.

August 14, 2018 | 06:29 PM - Posted by RaysIsPopularNowThatBigBucksJHHisOnboardWithRayTracing (not verified)

Well, for gaming, that visual quality is not going to be noticed as much at higher FPS. And Nvidia gets their leading FPS metrics by having more Raster Operations Pipelines on their GP102-based GTX 1080 Ti SKUs, with 88 out of GP102's 96 available ROPs.

But if ray tracing done on the Tensor Cores can take some of the stress off of the ROPs' workload, then that could help AMD also on Vega, using any spare nCU shader cores for matrix math calculations or ray tracing calculations. Tensor cores/processors are nothing but matrix math units, and all that can be done on any GPU's shader cores also, in addition to using CPU cores, where ray tracing was traditionally done.
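(The matrix math primitive the comment describes is a fused matrix multiply-accumulate, D = A·B + C, which Volta's tensor cores perform on small 4x4 tiles in hardware. A toy Python sketch of that same math, runnable on any CPU, just far more slowly:)

```python
def matmul_accumulate(A, B, C):
    """Compute D = A @ B + C for square matrices given as lists of lists.

    This is the operation a tensor core performs on a 4x4 tile per clock;
    ordinary shader cores or CPU cores can do the identical math, only
    without the dedicated hardware throughput.
    """
    n = len(A)
    return [[C[i][j] + sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# Accumulating into a matrix of ones: D = A @ I + ones
A = [[1.0, 2.0], [3.0, 4.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
ones = [[1.0, 1.0], [1.0, 1.0]]
D = matmul_accumulate(A, I, ones)  # [[2.0, 3.0], [4.0, 5.0]]
```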

Vega's GPU micro-arch is getting some AI-oriented ISA extensions on Vega 20 along with the 7nm die shrink and probably other tweaks. But current Vega 10 based Radeon 56/64 SKUs could probably benefit also, and if that can take some lighting effect generation stress off of Vega 10's Raster Operations Pipeline workload (limited to 64 ROPs max), then Vega 10 based SKUs may just get some FPS improvements via ray tracing also.

That's still not going to help that Vega 10 design outpace the GP102-based GTX 1080 Ti's FPS metrics; just look at the pixel fill rates over at TechPowerUp's GPU database for Vega 56/64, and even the GTX 1080 (based on the GP104 silicon), which only have 64 ROPs each, so the GTX 1080 Ti beats the GTX 1080 in pixel fill rates too.

The Vega GPU micro-arch is not to blame for Vega's lesser showing in gaming relative to the GTX 1080 Ti's higher pixel fill rates from those 88 ROPs, which directly translate to higher FPS numbers. AMD will most definitely have to engineer some ray tracing cores of its own if that cannot be done efficiently on Vega's compute cores already. AMD has to get tensor processing cores anyway for the AI market, and those are just dedicated matrix math units anyway.

Don't blame the Vega GPU micro-arch for AMD's decision to forgo ROPs for more shader cores on that one Vega 10 base die tapeout. AMD could do a new Vega die tapeout with more ROPs and compete with the GTX 1080 Ti, but Nvidia is moving up to Turing, so AMD needs to move on to Navi.

Ray tracing is the better method for lighting calculations, producing more realistic shadows, reflections, refractions, and other illumination effects. Just look at that PowerVR Wizard GPU IP for in-hardware ray tracing; those articles are listed on PCPer. Nvidia just has more money to pay to get folks onboard with ray tracing, a whole lot more than Imagination Technologies ever had. So now that Nvidia is pushing limited ray tracing, everybody will have to get with ray tracing going forward.
