NVIDIA GeForce 418.91 WHQL Driver Enables DLSS in Battlefield V and Metro Exodus

Subject: Graphics Cards | February 13, 2019 - 12:17 PM |
Tagged: whql, rtx, raytracing, nvidia, Metro Exodus, graphics, gpu, geforce, gaming, driver, DLSS, battlefield V, 418.91

NVIDIA's GeForce 418.91 WHQL drivers have brought DLSS support to Battlefield V, and add both real-time ray tracing and DLSS support for the upcoming Metro Exodus, which will be the first game to support both technologies from day one when it is released (now exclusively on Epic's game store) on February 15.

NV_BFV_Screen.PNG

From NVIDIA:

Battlefield V - This stunning World War II combat game, created by EA and DICE, was the first to support real-time ray-traced reflections and has now added support for DLSS — giving a performance boost of up to 40 percent with ray-traced reflections enabled.

Metro Exodus - The third installment in the haunting Metro franchise, developed by 4A Games and Deep Silver, will support RTX-enabled real-time ray tracing — the first time it has been used in a game for global illumination. At launch, the game will also support DLSS, boosting performance up to 30 percent, as well as a host of other NVIDIA gaming technologies, including HairWorks, PhysX, Ansel and Highlights.

NVIDIA has posted a video showcasing the performance improvement from DLSS with real-time ray tracing enabled in BFV, where gains of up to 40% are advertised:

As for Metro Exodus, with its additional ray-traced components it seems the upcoming game will end up being a popular benchmark for these technologies, after we have seen most of the ray tracing and DLSS discussion surround BFV to this point (Port Royal notwithstanding). At some future date Shadow of the Tomb Raider will enter the mix as well, but it is still awaiting ray tracing and DLSS support via a planned update.

For its part, Metro Exodus gains only 30% with DLSS (vs. real-time ray tracing + TAA) according to NVIDIA, which is noticeably lower than the boost to BFV. We have seen a preview of real-time ray tracing and DLSS performance in the latest Metro game over at Tom's Hardware, where they look at the performance differences and perceived quality between the two. It's also worth noting that neither BFV nor Metro Exodus is a fully ray-traced game, as Tom's explains:

"Battlefield applies ray tracing to reflections. Metro Exodus uses it for global illumination from the sun/sky, modeling how light interacts with various surfaces. Local light sources are not ray traced, though."

The Battlefield V DLSS update is now rolling out, with some early performance numbers already available. Metro Exodus will be released on February 15, and is the latest title to eschew Steam in favor of Epic's new platform.

Source: NVIDIA

Q2VKPT Makes Quake 2 the First Entirely Raytraced Game

Subject: General Tech | January 18, 2019 - 06:09 PM |
Tagged: vulkan, rtx, raytracing, Quake II, quake, Q2VKPT, Q2PRO, path tracing, open source, nvidia, john carmack, github, fps

Wait - the first fully raytraced game was released in 1997? Not exactly, but Q2VKPT is. That name is not a typo (it stands for Quake 2 Vulkan Path Tracing); it's actually a game - or, more correctly, a proof-of-concept. But not just any game; we're talking about Quake 2. Technically this is a combination of Q2PRO, "an enhanced Quake 2 client and server for Windows and Linux", and VKPT, or Vulkan Path Tracing.

q2vkpt_screenshot_1.jpg

The end result is a fully raytraced experience that, if nothing else, gives the computer hardware media more to run on NVIDIA's GeForce RTX graphics cards right now than the endless BFV demos. Who would have guessed we'd be benchmarking Quake 2 again in 2019?

"Q2VKPT is the first playable game that is entirely raytraced and efficiently simulates fully dynamic lighting in real-time, with the same modern techniques as used in the movie industry (see Disney's practical guide to path tracing). The recent release of GPUs with raytracing capabilities has opened up entirely new possibilities for the future of game graphics, yet making good use of raytracing is non-trivial. While some games have started to explore improvements in shadow and reflection rendering, Q2VKPT is the first project to implement an efficient unified solution for all types of light transport: direct, scattered, and reflected light (see media). This kind of unification has led to a dramatic increase in both flexibility and productivity in the movie industry. The chance to have the same development in games promises a similar increase in visual fidelity and realism for game graphics in the coming years.

This project is meant to serve as a proof-of-concept for computer graphics research and the game industry alike, and to give enthusiasts a glimpse into the potential future of game graphics. Besides the use of hardware-accelerated raytracing, Q2VKPT mainly gains its efficiency from an adaptive image filtering technique that intelligently tracks changes in the scene illumination to re-use as much information as possible from previous computations."
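The adaptive filter described above is specific to Q2VKPT, but the general idea of re-using information from previous frames can be illustrated with a toy example. The sketch below (plain C++, with invented names, and nothing like the project's actual filter) simply blends each new noisy frame into a running per-pixel average, and restarts the average wherever the scene has changed:

    // Toy temporal accumulation: blend each new noisy sample into a running
    // per-pixel history, and drop the history where the scene changed.
    #include <cstddef>
    #include <vector>

    struct Pixel { float r, g, b; };

    void accumulate(std::vector<Pixel>& history,          // running average
                    const std::vector<Pixel>& newSample,   // current noisy frame
                    const std::vector<bool>& historyValid, // false where the scene changed
                    float alpha)                           // blend weight, e.g. 0.1f
    {
        for (std::size_t i = 0; i < history.size(); ++i) {
            if (!historyValid[i]) {
                history[i] = newSample[i];  // restart accumulation on change
                continue;
            }
            history[i].r = (1.0f - alpha) * history[i].r + alpha * newSample[i].r;
            history[i].g = (1.0f - alpha) * history[i].g + alpha * newSample[i].g;
            history[i].b = (1.0f - alpha) * history[i].b + alpha * newSample[i].b;
        }
    }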

The project can be downloaded from GitHub, and the developers neatly listed the needed files for download (the .pak files from either the Quake 2 demo or the full version can be used):

  • GitHub Repository
  • Windows Binary on GitHub
  • Quake II Starter ("Quake II Starter is a free, standalone Quake II installer for Windows that uses the freely available 3.14 demo, 3.20 point release and the multiplayer-focused Q2PRO client to create a functional setup that's capable of playing online.")

There was also a full Q&A from the developers, in which some obvious questions were answered, including the observation that Quake 2 is "ancient" at this point, and shouldn't it "run at 6000 FPS by now":

"While it is true that Quake II is a relatively old game with rather low geometric complexity, the limiting factor of path tracing is not primarily raytracing or geometric complexity. In fact, the current prototype could trace many more rays without a notable change in frame rate. The computational cost of the techniques used in the Q2VKPT prototype mainly depend on the number of (indirect) light scattering computations and the number of light sources. Quake II was already designed with many light sources when it was first released, in that sense it is still quite a modern game. Also, the number of light scattering events does not depend on scene complexity. It is therefore thinkable that the techniques we use could well scale up to more recent games."

And on the subject of path tracing vs. ray tracing:

"Path tracing is an elegant algorithm that can simulate many of the complex ways that light travels and scatters in virtual scenes. Its physically-based simulation of light allows highly realistic rendering. Path tracing uses Raytracing in order to determine the visibility in-between scattering events. However, Raytracing is merely a primitive operation that can be used for many things. Therefore, Raytracing alone does not automatically produce realistic images. Light transport algorithms like Path tracing can be used for that. However, while elegant and very powerful, naive path tracing is very costly and takes a long time to produce stable images. This project uses a smart adaptive filter that re-uses as much information as possible across many frames and pixels in order to produce robust and stable images."

This project is the result of work by one Christoph Schied, and was "a spare-time project to validate the results of computer graphics research in an actual game". Whatever your opinion of Q2VKPT, as we look back at Quake 2 and its impressive original lighting effects, it's pretty clear that John Carmack was far ahead of his time (and it could be said that it's taken this long for hardware to catch up).

Source: Q2VKPT

3DMark Port Royal Ray Tracing Benchmark Launches January 8th

Subject: Graphics Cards | December 10, 2018 - 10:36 AM |
Tagged: 3dmark, ray tracing, directx raytracing, raytracing, rtx, benchmarking, benchmarks

After first announcing it last month, UL this weekend provided new information on its upcoming ray tracing-focused addition to the 3DMark benchmarking suite. Port Royal, what UL calls the "world's first dedicated real-time ray tracing benchmark for gamers," will launch Tuesday, January 8, 2019.

For those eager for a glimpse of the new ray-traced visual spectacle, or for the majority of gamers without a ray tracing-capable GPU, the company has released a video preview of the complete Port Royal demo scene.

Access to the new Port Royal benchmark will be limited to the Advanced and Professional editions of 3DMark. Existing 3DMark users can upgrade to the benchmark for $2.99, and it will become part of the base $29.99 Advanced Edition package for new purchasers starting January 8th.

Real-time ray tracing promises to bring new levels of realism to in-game graphics. Port Royal uses DirectX Raytracing to enhance reflections, shadows, and other effects that are difficult to achieve with traditional rendering techniques.

As well as benchmarking performance, 3DMark Port Royal is a realistic and practical example of what to expect from ray tracing in upcoming games — ray tracing effects running in real time at reasonable frame rates at 2560 × 1440 resolution.

3DMark Port Royal was developed with input from AMD, Intel, NVIDIA, and other leading technology companies. UL says it worked especially closely with Microsoft to create a first-class implementation of the DirectX Raytracing API.

Port Royal will run on any graphics card with drivers that support DirectX Raytracing. As with any new technology, there are limited options for early adopters, but more cards are expected to get DirectX Raytracing support in 2019.

3DMark can be acquired via Steam or directly from UL's online store. The Advanced Edition, which includes access to all benchmarks, is priced at $29.99.

Manufacturer: Microsoft

O Rayly? Ya Rayly. No Ray!

Microsoft has just announced a raytracing extension to DirectX 12, called DirectX Raytracing (DXR), at the 2018 Game Developers Conference in San Francisco.

microsoft-2015-directx12-logo.jpg

The goal is not to completely replace rasterization… at least not yet. DXR will mostly be used for effects that require supplementary datasets, such as reflections, ambient occlusion, and refraction. Rasterization, the typical way that 3D geometry gets drawn on a 2D display, converts triangle coordinates into screen coordinates, and then a point-in-triangle test runs across every sample. This will likely occur once per AA sample (minus pixels that the triangle can't possibly cover -- such as a pixel outside of the triangle's bounding box -- but that's just an optimization).

microsoft-2018-gdc-directx12raytracing-rasterization.png

For rasterization, each triangle is laid on a 2D grid corresponding to the draw surface.
If any sample is in the triangle, the pixel shader is run.
This example shows the rotated grid MSAA case.
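As a rough illustration of that test, the C++ sketch below checks whether a sample lies inside a triangle using 2D edge functions. Real GPUs do this in fixed-function hardware across many samples in parallel, so treat this purely as a conceptual sketch:

    // A sample covers a (counter-clockwise wound) triangle if it lies on the
    // same side of all three edges; each edge function is a 2D cross product.
    struct Point2 { float x, y; };

    float edgeFunction(const Point2& a, const Point2& b, const Point2& p)
    {
        // z-component of cross(b - a, p - a): sign says which side of edge ab
        // the point p lies on.
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    bool sampleCoversTriangle(const Point2& v0, const Point2& v1,
                              const Point2& v2, const Point2& sample)
    {
        // All three edge functions non-negative -> the sample is inside.
        return edgeFunction(v0, v1, sample) >= 0.0f &&
               edgeFunction(v1, v2, sample) >= 0.0f &&
               edgeFunction(v2, v0, sample) >= 0.0f;
    }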

A program, called a pixel shader, is then run with some set of data that the GPU could gather on every valid pixel in the triangle. This set of data typically includes things like world coordinate, screen coordinate, texture coordinates, nearby vertices, and so forth. This lacks a lot of information, especially things that are not visible to the camera. The application is free to provide other sources of data for the shader to crawl… but what?

  • Cubemaps are useful for reflections, but they don’t necessarily match the scene.
  • Voxels are useful for lighting, as seen with NVIDIA’s VXGI and VXAO.

This is where DirectX Raytracing comes in. There are quite a few components to it, but it's basically a new pipeline that handles how rays are cast into the environment. After being queued, it starts out with a ray-generation stage, and then, depending on what happens to the ray in the scene, closest-hit, any-hit, and miss shaders are run. Ray generation allows the developer to set up how the rays are cast, by calling an HLSL intrinsic instruction, TraceRay (which is a clever way of invoking them, by the way). This function takes an origin and a direction, so you could choose to, for example, cast rays only in the direction of lights if your algorithm were to, for instance, approximate partially occluded soft shadows from a non-point light. (There are better algorithms to do that, but it's just the first example that came off the top of my head.) The closest-hit, any-hit, and miss shaders occur at the point where the traced ray ends.
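As a conceptual illustration only (this is ordinary C++ with invented names, not HLSL, and the real DXR runtime dispatches the hit and miss shaders itself rather than having ray generation branch on a return value), the relationship between the stages looks roughly like this:

    struct Vec3    { float x, y, z; };
    struct RayDesc { Vec3 origin, direction; float tMin, tMax; };
    struct HitInfo { bool hitSomething; Vec3 color; };

    // Stand-in for the HLSL TraceRay intrinsic: walks the scene and reports a hit.
    HitInfo traceRay(const RayDesc& ray);

    Vec3 closestHitShader(const HitInfo& hit) { return hit.color; }          // ray struck geometry
    Vec3 missShader()                         { return {0.2f, 0.3f, 0.8f}; } // ray left the scene (sky)

    // "Ray generation": one invocation per pixel decides where a ray is cast.
    Vec3 rayGeneration(const Vec3& cameraPos, const Vec3& pixelDir)
    {
        RayDesc ray{cameraPos, pixelDir, 0.001f, 10000.0f};
        HitInfo hit = traceRay(ray);
        return hit.hitSomething ? closestHitShader(hit) : missShader();
    }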

To connect this with current technology, imagine that ray-generation is like a vertex shader in rasterization, where it sets up the triangle to be rasterized, leading to pixel shaders being called.

microsoft-2018-gdc-directx12raytracing-multibounce.png

Even more interesting – the closest-hit, any-hit, and miss shaders can call TraceRay themselves, which is used for multi-bounce and other recursive algorithms (see: figure above). The obvious use case might be reflections, which is the headline of the GDC talk, but they want it to be as general as possible, aligning with the evolution of GPUs. Looking at NVIDIA's VXAO implementation, it also seems like a natural fit for a raytracing algorithm.

Speaking of data structures, Microsoft also detailed what they call the acceleration structure. It is composed of two levels. The top level contains per-object metadata, like its transformation and whatever other data the developer wants to add to it. The bottom level contains the geometry. The briefing states, "essentially vertex and index buffers", so we asked for clarification. DXR requires that triangle geometry be specified as vertex positions in either 32-bit float3 or 16-bit float3 values. There is also a stride property, so developers can tweak data alignment and use their rasterization vertex buffer, as long as it's HLSL float3, either 16-bit or 32-bit.
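For reference, this is roughly how that bottom-level triangle geometry is described in the DXR API that eventually shipped (field names taken from the final d3d12.h; the experimental SDK shown at GDC may have differed, and the buffer addresses here are placeholders):

    #include <d3d12.h>

    // Describe one triangle mesh for a bottom-level acceleration structure.
    // vertexBufferVa / indexBufferVa are assumed GPU virtual addresses of the
    // same buffers the rasterizer already uses.
    D3D12_RAYTRACING_GEOMETRY_DESC DescribeMesh(D3D12_GPU_VIRTUAL_ADDRESS vertexBufferVa,
                                                UINT vertexCount, UINT vertexStrideBytes,
                                                D3D12_GPU_VIRTUAL_ADDRESS indexBufferVa,
                                                UINT indexCount)
    {
        D3D12_RAYTRACING_GEOMETRY_DESC geometry = {};
        geometry.Type  = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
        geometry.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;

        geometry.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;  // 32-bit float3 positions
        geometry.Triangles.VertexCount  = vertexCount;
        geometry.Triangles.VertexBuffer.StartAddress  = vertexBufferVa;
        geometry.Triangles.VertexBuffer.StrideInBytes = vertexStrideBytes; // stride lets a raster layout be reused

        geometry.Triangles.IndexFormat = DXGI_FORMAT_R32_UINT;
        geometry.Triangles.IndexCount  = indexCount;
        geometry.Triangles.IndexBuffer = indexBufferVa;
        return geometry;
    }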

As for the tools to develop this in…

microsoft-2018-gdc-PIX.png

Microsoft announced PIX back in January 2017. This is a debugging and performance analyzer for 64-bit DirectX 12 applications. Microsoft will upgrade it to support DXR as soon as the API is released (specifically, “Day 1”). This includes the API calls, the raytracing pipeline resources, the acceleration structure, and so forth. As usual, you can expect Microsoft to support their APIs with quite decent – not perfect, but decent – documentation and tools. They do it well, and they want to make sure it’s available when the API is.

ea-2018-SEED screenshot (002).png

Example of DXR via EA's in-development SEED engine.

In short, raytracing is here, but it’s not taking over rasterization. It doesn’t need to. Microsoft is just giving game developers another, standardized mechanism to gather supplementary data for their games. Several game engines have already announced support for this technology, including the usual suspects of any top-tier game technology:

  • Frostbite (EA/DICE)
  • SEED (EA)
  • 3DMark (Futuremark)
  • Unreal Engine 4 (Epic Games)
  • Unity Engine (Unity Technologies)

They also said, “and several others we can’t disclose yet”, so this list is not even complete. But, yeah, if you have Frostbite, Unreal Engine, and Unity, then you have a sizeable market as it is. There is always a question about how much each of these engines will support the technology. Currently, raytracing is not portable outside of DirectX 12, because it’s literally being announced today, and each of these engines intends to support more than just Windows 10 and Xbox.

Still, we finally have a standard for raytracing, which should drive vendors to optimize in a specific direction. From there, it's just a matter of someone taking the risk to actually use the technology for a cool work of art.

If you want to read more, check out Ryan's post about the also-announced RTX, NVIDIA's raytracing technology.

MWC 16: Imagination Technologies Ray Tracing Accelerator

Subject: Graphics Cards, Mobile, Shows and Expos | February 23, 2016 - 08:46 PM |
Tagged: raytracing, ray tracing, PowerVR, mwc 16, MWC, Imagination Technologies

For the last couple of years, Imagination Technologies has been pushing hardware-accelerated ray tracing. One of the major problems in computer graphics is knowing what geometry and material corresponds to a specific pixel on the screen. Several methods exist, although typical GPUs crush a 3D scene into the virtual camera's 2D space and do a point-in-triangle test on it. Once they know where in the triangle the pixel is, if it is in the triangle at all, it can be colored by a pixel shader.

imagtech-2016-PowerVR-GR6500-GPU-PowerVR-Wizard-GPUs.png

Another method is casting light rays into the scene, and assigning a color based on the material that they land on. This is ray tracing, and it has a few advantages. First, it is much easier to handle reflections, transparency, shadows, and other effects where information is required beyond what the affected geometry and its material provide. There are usually ways around this without resorting to ray tracing, but they each have their own trade-offs. Second, it can be more efficient for certain data sets. Rasterization, since it's based around a “where in a triangle is this point” algorithm, needs geometry to be made up of polygons.

It also has the appeal of being what the real world sort-of does (assuming we don't need to model Gaussian beams). That doesn't necessarily mean anything, though.
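Mechanically, the primitive at the heart of ray casting is a ray/triangle intersection test rather than a point-in-triangle test. A bare-bones C++ version of the well-known Möller-Trumbore algorithm looks something like this (purely illustrative; dedicated hardware like Imagination's performs the equivalent work in fixed-function units, and against an acceleration structure rather than one triangle at a time):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3  sub(const Vec3& a, const Vec3& b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3  cross(const Vec3& a, const Vec3& b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    float dot(const Vec3& a, const Vec3& b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Returns true, and the distance t along the ray, if the ray hits the triangle.
    bool rayHitsTriangle(const Vec3& origin, const Vec3& dir,
                         const Vec3& v0, const Vec3& v1, const Vec3& v2, float& t)
    {
        const Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        const Vec3 p  = cross(dir, e2);
        const float det = dot(e1, p);
        if (std::fabs(det) < 1e-8f) return false;     // ray is parallel to the triangle

        const float invDet = 1.0f / det;
        const Vec3 s = sub(origin, v0);
        const float u = dot(s, p) * invDet;           // first barycentric coordinate
        if (u < 0.0f || u > 1.0f) return false;

        const Vec3 q = cross(s, e1);
        const float v = dot(dir, q) * invDet;         // second barycentric coordinate
        if (v < 0.0f || u + v > 1.0f) return false;

        t = dot(e2, q) * invDet;                      // distance along the ray
        return t > 0.0f;
    }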

At Mobile World Congress, Imagination Technologies once again showed off their ray tracing hardware, embodied in the PowerVR GR6500 GPU. This graphics processor has dedicated circuitry to calculate rays, and they use it in a couple of different ways. They presented several demos that modified Unity 5 to take advantage of their ray tracing hardware. A particularly interesting one was a quick, seven-second video that added ray traced reflections atop an otherwise rasterized scene. It was a little too smooth, creating reflections that were too glossy, but that could probably be downplayed in the material (Update, Feb 24th @ 5pm: Car paint is actually that glossy; it's a different issue). Back when I was working on a GPU-accelerated software renderer, before Mantle, Vulkan, and DirectX 12, I was hoping to use OpenCL-based ray traced highlights on otherwise idle GPUs, if I didn't have any other purposes for them. Now, though, those GPUs can be exposed to graphics APIs directly, so they might not be so idle.

The downside of dedicated ray tracing hardware is that, well, the die area could have been used for something else. Extra shaders, for compute, vertex, and material effects, might be more useful in the real world... or maybe not. Add in the fact that fixed-function circuitry already exists for rasterization, and it makes you balance gain against cost.

It could be cool, but it has its trade-offs, like anything else.

Manufacturer: Scott Michaud

A new generation of Software Rendering Engines.

We have been busy with side projects, here at PC Perspective, over the last year. Ryan has nearly broken his back rating the frames. Ken, along with running the video equipment and "getting an education", developed a hardware switching device for Wirecast and XSplit.

My project, "Perpetual Motion Engine", has been researching and developing a GPU-accelerated software rendering engine. Now, to be clear, this is just in very early development for the moment. The point is not to draw beautiful scenes. Not yet. The point is to show what OpenGL and DirectX does and what limits are removed when you do the math directly.

Errata: BioShock uses a modified Unreal Engine 2.5, not 3.

In the above video:

  • I show the problems with graphics APIs such as DirectX and OpenGL.
  • I talk about what those APIs attempt to solve, finding color values for your monitor.
  • I discuss the advantages of boiling graphics problems down to general mathematics.
  • Finally, I prove the advantages of boiling graphics problems down to general mathematics.

I would recommend watching the video, first, before moving forward with the rest of the editorial. A few parts need to be seen for better understanding.

Click here, after you watch the video, to read more about GPU-accelerated Software Rendering.