3DMark "Port Royal" DLSS Update Released

Subject: Graphics Cards | February 5, 2019 - 11:42 PM
Tagged: rtx, nvidia, Futuremark, DLSS, 3dmark

If you have an RTX-based graphics card, then you can now enable Deep Learning Super Sampling (DLSS) on 3DMark’s Port Royal benchmark. NVIDIA has also published a video of the benchmark running at 1440p alongside Temporal Anti-Aliasing (TAA).

Two things stand out about the video: Quality and Performance.

On the quality side: holy crap it looks good. One of the major issues with TAA is that it makes everything that’s moving somewhat blurry and/or otherwise messed up. For DLSS? It’s very clear and sharp, even in motion. It is very impressive. It also seems to behave well when there are big gaps in rendered light intensity, which, in my experience, can be a problem for antialiasing.

On the performance side, DLSS was shown to be significantly faster than TAA – seemingly larger than the gap between TAA and no anti-aliasing at all. The gap is because DLSS renders at a lower resolution automatically, and this behavior is published on NVIDIA’s website. (Ctrl+F for “to reduce the game’s internal rendering resolution”.)
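As a rough sanity check on where that speedup comes from, the pixel arithmetic alone is substantial. The snippet below assumes an internal render resolution of 1080p purely for illustration; NVIDIA does not publish a fixed internal resolution per title, so the exact figure is an assumption.

```python
# Rough pixel-count arithmetic behind DLSS's internal-resolution trick.
# The 1080p internal resolution is an assumption for illustration only.
output_px = 2560 * 1440    # 3,686,400 pixels shaded at native 1440p
internal_px = 1920 * 1080  # 2,073,600 pixels if DLSS renders internally at 1080p

savings = 1 - internal_px / output_px
print(f"Internal render shades {savings:.0%} fewer pixels")  # ~44% fewer
```

Shading roughly 44% fewer pixels per frame leaves plenty of headroom even after the tensor-core upscaling pass, which is consistent with DLSS beating TAA by more than TAA trails no-AA.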

Update on Feb 6th @ 12:36pm EST:

Apparently there's another mode, called DLSS 2X, that renders at native resolution. It won't have the performance boost over TAA, but it should have slightly higher rendering quality. I'm guessing it will be especially noticeable in the following situation.

End of Update.

While NVIDIA claims that it shouldn’t cause noticeable image degradation, I believe I can see an example (in the video and their official screenshots) where the reduced resolution causes artifacts. If you look at the smoothly curving surfaces on the ring under the ship (as the camera zooms in just after 59s) you might be able to see a slight horizontal jaggedness, almost a moiré effect. While I’m not 100% sure that it’s caused by the forced dip in resolution, it doesn’t seem to appear in the TAA version. If this is an artifact of the lowered resolution, I’m curious whether NVIDIA will allow us to run at native resolution and still perform DLSS, or if the algorithm simply doesn’t operate that way.


NVIDIA's Side-by-Side Sample with TAA


NVIDIA's Side-by-Side Sample with DLSS


DLSS with artifacts pointed out

Image Credit: NVIDIA and FutureMark. Source.

That said, the image quality of DLSS is significantly above TAA. It’s painful watching an object move smoothly on a deferred rendering setup and seeing TAA freak out just a little to where it’s noticeable… but not enough to justify going back to a forward-rendering system with MSAA.

Source: NVIDIA


February 6, 2019 | 04:29 AM - Posted by Luis Ferreira (not verified)

Looks mighty impressive — exactly what ray tracing needed: more performance.

February 6, 2019 | 11:29 AM - Posted by Spunjji

Who died and made TAA king, though? They keep putting out these forced comparisons but TAA was always a bad option. Nvidia are throwing the fight in favour of their technology and people keep failing to call them out on this.

DLSS also totally messes up the blur around the edges of the landing gear in the example screenshots above. Honestly that stood out to me even more than the jaggies - it looks terrible, like it's trying to interpolate a hard edge transition from what is supposed to be a soft edge.

DLSS is quite impressive but it is *not* good enough for them to get away with claiming 1440p output when they're rendering at less than 1080p. I'd like to see a comparison between 1440p no AA, 1440p FXAA, "1440p" DLSS and 1080p in Port Royal.

February 7, 2019 | 05:50 AM - Posted by Othertomperson (not verified)

TAA is the main reason SLI benchmarks poorly too. Reviewers never turn it off, and it breaks AFR. I wonder if it’s something Nvidia push to make buying a £1200 GPU seem better than buying a second £550 GPU for better performance.

February 6, 2019 | 01:21 PM - Posted by TensorCoresMoreThanRayTracingCoresAreNeededByEveryone (not verified)

"If you have an RTX-based graphics card"

Well, technically a Volta GPU has Tensor Cores, so not only can you train the DLSS AI on Volta's Tensor Cores, you can also run the trained DLSS AI on those very same Tensor Cores.

Turing's RTX comprises both ray tracing cores (Volta lacks RT cores) and Tensor Cores, but Turing is not the first Nvidia GPU generation that supports Tensor Cores or is able to host trained AIs if needed.

Now Intel has released open-source AI-based denoising APIs/libraries that can denoise ray-traced output on x86 CPU cores. And Blender 3D's denoiser, though not AI-based AFAIK, will use both CPUs and GPUs to run its denoising.

Ray tracing workloads can be accelerated on any GPU's shader cores as well, so Nvidia's pre-Turing GPUs and AMD's GCN 1.1 or later GPUs will use the GPU for Cycles rendering in Blender 3D.

And Vega 20 has some AI-related ISA extensions that Vega 10 lacks, and those are going to help with AMD's ProRender plugins, which will also have some support for DLSS-like functionality and AI-based denoising libraries.

The big part of Nvidia's DLSS is that the AI running on Turing's Tensor Cores was not trained on Turing's Tensor Cores; it was trained on cluster computing systems with hundreds of Volta/Tesla GPUs. The AI was given a set of super-high-resolution gaming assets specific to that game and was trained to extrapolate the game's lower-resolution frames to look closer to the super-high-resolution images used on the computing clusters / Volta-Tesla AI training platform.

Certainly the first IP that AMD needs on its GPUs that Nvidia already has is the Tensor Core IP, not the ray tracing IP just yet. AMD's console customers will want trained AIs running on tensor cores for AI-based upscaling more than any ray tracing ability, which the gaming industry will take its time adopting. Vega 20 will have those Vega ISA extensions for AI usage, both for training AIs and for running the trained AIs.

AMD needs some DLSS-like AI-based super sampling of its own for gaming, running on tensor cores, because with DLSS enabled on Nvidia cards the frame rates go up, not down like when the ray tracing IP is enabled in games on Nvidia's Turing SKUs. AA/sampling is a really computationally intensive task that slows down raster-oriented games, and Nvidia's DLSS, using that trained AI, reduces the stress on the shader cores and moves the work onto the tensor cores, where the trained AI makes efficient work of the task, way more efficient than any shader-core-based AA/sampling method.
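The training pipeline described above can be sketched in miniature: pair each super-high-resolution frame with a downsampled copy, then fit a model to map low-res back to high-res. The toy NumPy version below uses a 2x box filter as a stand-in for the lower-resolution render and nearest-neighbor upscaling as the untrained baseline the network has to beat; none of this is NVIDIA's actual pipeline, just an illustration of the data-pairing idea.

```python
import numpy as np

# Toy sketch of DLSS-style training data prep: each high-res frame is
# paired with a low-res counterpart, and a model learns low-res -> high-res.
rng = np.random.default_rng(0)
hi_res = rng.random((8, 8))  # stand-in "ground truth" frame

# Downsample 2x by averaging each 2x2 block (the "low-res render").
lo_res = hi_res.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Naive baseline reconstruction: nearest-neighbor upscale back to 8x8.
baseline = np.repeat(np.repeat(lo_res, 2, axis=0), 2, axis=1)

# Training would minimize this reconstruction error against the ground truth.
mse = float(np.mean((baseline - hi_res) ** 2))
print(f"baseline reconstruction MSE: {mse:.4f}")
```

A trained network replaces the nearest-neighbor step and drives that error down; at inference time only the cheap low-res render plus the network pass is needed, which is where the frame-rate win comes from.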

February 6, 2019 | 01:58 PM - Posted by Rocky1234 (not verified)

DLSS is all fine and dandy until you figure in that it only works on supported games or software, as in Nvidia controls where and how it works. Something like TAA and others just work on everything because they are not 100% controlled by Nvidia.

If DLSS were just an option you could tick and have it work, then maybe it could be considered a great option, but as it sits this tech only works if Nvidia wants it to work in any given title. Oh, and it also only works on Nvidia cards that are RTX, and maybe Volta. So it's a closed-off tech that only a select few will be able to use, and even fewer because of the lack of support in game titles.

What we need is an option like this that will work on all current and new titles, and that will not work on only one set of hardware. This is the only way things like this will move forward and be adopted into the market as a tech worthwhile using. I am sure AMD will come out with something, and it will be open for everyone to use, and then down the road Nvidia will be forced to factor it into their own products like they did with FreeSync, and then had the gall to say, oh, it just does not work right for most monitors.

Hey Nvidia, maybe it is how your hardware is trying to interact with the tech and not the fault of the tech itself. Maybe work out the problem in your hardware and keep your mouth shut until you have fixed your own problem; let's not be another Apple here and blame everyone else for your lack of know-how on certain things.

February 6, 2019 | 04:00 PM - Posted by Scott Michaud

I normally agree, but trained AI is a bit of a special case. It is not just an algorithm embodied as software or hardware, but also a collection of data that is used as a template.

Who would manage the collection of training data that could be used by all graphics vendors?

February 6, 2019 | 05:53 PM - Posted by ItsThemAdsAndAdScriptsWhatIsEatingTheRAM (not verified)

Oh come on, you can do the training on any GPUs/CPUs that have TensorFlow libraries available, and Nvidia does not control that.

You could do plenty of AI training with any of the Maxwell-to-Pascal or GCN generations of GPUs. All Nvidia is doing is getting the high-resolution assets from the game's maker and doing the learning on its CPU clusters and Volta GPU accelerators.

And that process of training can be done on any cluster of CPUs and/or GPUs that has TensorFlow libraries available.

You train the AI on the high-resolution images, and you can run the trained AI on any CPU or GPU that has the standard TensorFlow libraries available.

The process of running the trained AI is going to go a little quicker on dedicated tensor cores, but even on Pascal or Vega GPUs that process is quicker than on consumer CPU cores. Vega 20 also has an extended ISA that supports more matrix-related instructions, but Nvidia's Volta and Turing dedicated Tensor Cores are going to be a bit more efficient at running any trained-AI-based DLSS-like operations; ditto for the training process, unless you make use of Google's TPUs for your training needs.

Nvidia is not the only GPU maker that game developers work with, and even M$ has invested in that sort of research for its next generation of console development. The academic research on the subject of AI-based image processing, including upscaling with high-resolution images, is extensive.

Here is an article from Photo District News (1) describing how that's done:

(1) "How AI “Learns” to Upscale Low-Resolution Images"
February 6, 2019 | 06:54 PM - Posted by Scott Michaud

I'm not talking about the process of creating the training data. I'm talking about who owns the trained data. Who will train the data for an open solution? It can be done on any hardware, but who will, and how will they license it?

So, yes, anyone could create a bunch of upscaled images and process it down to some training set. But who will? Unlike other AA solutions, that training data is required as well as the hardware and software.

February 6, 2019 | 09:09 PM - Posted by ReallyNvidiaIsNotTheIPandPatentLeaderCompareToOthers (not verified)

The Images and Textures are the property of the games companies not Nvidia.

Nvidia trained that AI on its server CPUs/Volta processors, but it does not have rights to any game company's data, and Nvidia may not have even developed the methods used to do the training; most of that may be open source and available to all, or developed by some academic or other institution from which Nvidia has to obtain license rights just like everyone else. And Nvidia, as well as others, will be constantly refining the training process and the trained AIs, as that's always going to be an ongoing process for anyone making use of AI-based image processing techniques.

CUDA is just another programming language/environment, like OpenCL and other languages and environments/APIs. Nvidia is not the only entity that has the resources to do such research, and IBM, Apple, Google, Microsoft, Sony, and others have their own ongoing R&D departments that produce a lot more digital image processing patent IP than Nvidia. Sony has tons of that sort of image processing and image enhancement IP.

Maybe even Nvidia's ray tracing cores are not that much different from Imagination Technologies' PowerVR Wizard hardware-based ray tracing IP, which predated Nvidia's ray tracing IP by some years. And tensor cores were around long before Nvidia decided to include that IP in its GPU offerings. Anyone with deep pockets can probably walk up to Imagination Technologies and come to licensing agreements that are forever restricted under NDAs and other such restrictive agreements.

Companies like Nvidia and Intel and others are big users of third-party IP. An example is Intel's Hyperthreading (SMT), which came from academic sources and was refined by DEC before DEC was acquired; Intel got its hands on that IP (DEC's implementation of it) via Compaq/HP:

"While multithreading CPUs have been around since the 1950s, simultaneous multithreading was first researched by IBM in 1968 as part of the ACS-360 project.[2] The first major commercial microprocessor developed with SMT was the Alpha 21464 (EV8). This microprocessor was developed by DEC in coordination with Dean Tullsen of the University of California, San Diego, and Susan Eggers and Henry Levy of the University of Washington. The microprocessor was never released, since the Alpha line of microprocessors was discontinued shortly before HP acquired Compaq which had in turn acquired DEC. Dean Tullsen's work was also used to develop the Hyper-threading (Hyper-threading technology or HTT) versions of the Intel Pentium 4 microprocessors, such as the "Northwood" and "Prescott". (1)

Intel most certainly does not own the rights to simultaneous multithreading (SMT); Intel only owns the IP rights to Intel's implementation of SMT, branded under the trade name Hyperthreading(TM). And even then, for certain parts of Intel's and others' implementations of SMT, there may still be some licensing required from the University of California and the University of Washington and others.

Nvidia obtains its employees from the very same universities as everyone else, and there are loads of government-funded grants (DARPA, ARPA, NASA, other government agencies) out there that fund a great many university researchers on just these sorts of image refinement and upscaling IP. The NRO is a big user of that, as is NASA and its various interplanetary research divisions. Ditto for the military and its terabucks pocketbooks.

One thing is for certain: both Nvidia and Intel, among others, employ marketing departments that actively attempt to obfuscate the actual sources behind the technology that they like to claim they created.

There are a few bits of GPU IP that Nvidia claimed were theirs, and Nvidia tried to sue Samsung and others, but at the deposition and during the trial Nvidia removed some of its patents from contention lest Nvidia lose those patents on closer examination, because of prior art that the USPTO's lackluster work missed in granting some of those patents in the first place.

The world of patents and IP rights is replete with such occurrences, and with company marketing departments coming up with trade names like Hyperthreading, FreeSync, G-Sync! And the gaming market consumer base is a prime example of some of the least sophisticated consumers on the planet, so it does not take the upper echelons of the marketing world to easily fool the bog-standard gaming consumer.

(1) "Simultaneous multithreading"

February 7, 2019 | 06:13 PM - Posted by Frenchfries (not verified)

How about the game publisher? Hahahahaha, good one, I know, but realistically, if they do it there are no licensing issues, since they already own the high-res assets, and it's in their interest to make sure their game runs well on lots of hardware.

The training won't eat too much into their budget, and it gives them another checkbox.
