3DMark "Port Royal" DLSS Update Released

Subject: Graphics Cards | February 5, 2019 - 11:42 PM |
Tagged: rtx, nvidia, Futuremark, DLSS, 3dmark

If you have an RTX-based graphics card, then you can now enable Deep Learning Super Sampling (DLSS) on 3DMark’s Port Royal benchmark. NVIDIA has also published a video of the benchmark running at 1440p alongside Temporal Anti-Aliasing (TAA).

Two things stand out about the video: Quality and Performance.

On the quality side: holy crap it looks good. One of the major issues with TAA is that it makes everything that’s moving somewhat blurry and/or otherwise messed up. For DLSS? It’s very clear and sharp, even in motion. It is very impressive. It also seems to behave well when there are big gaps in rendered light intensity, which, in my experience, can be a problem for antialiasing.

On the performance side, DLSS was shown to be significantly faster than TAA – seemingly larger than the gap between TAA and no anti-aliasing at all. The gap is because DLSS renders at a lower resolution automatically, and this behavior is published on NVIDIA’s website. (Ctrl+F for “to reduce the game’s internal rendering resolution”.)
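For a rough sense of why that happens, here's some back-of-the-envelope pixel math. Note that the 1920×1080 internal resolution below is purely an assumption for illustration; NVIDIA doesn't publish the exact render resolution used here.

```typescript
// Back-of-the-envelope math: why a lower internal resolution is such a
// big win. The 1920x1080 internal figure is an illustrative assumption,
// not a published number.
const output = { w: 2560, h: 1440 };    // resolution the benchmark presents
const internal = { w: 1920, h: 1080 };  // hypothetical DLSS render target

const outputPixels = output.w * output.h;       // 3,686,400
const internalPixels = internal.w * internal.h; // 2,073,600

const savings = 1 - internalPixels / outputPixels;
console.log(`${(savings * 100).toFixed(1)}% fewer pixels shaded per frame`);
// => "43.8% fewer pixels shaded per frame"
```

Shading that many fewer pixels per frame would easily explain how DLSS can beat even rendering with no anti-aliasing at all.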

Update on Feb 6th @ 12:36pm EST:

Apparently there's another mode, called DLSS 2X, that renders at native resolution. It won't have the performance boost over TAA, but it should have slightly higher rendering quality. I'm guessing the difference will be especially noticeable in situations like the artifact example below.

End of Update.

While NVIDIA claims that it shouldn't cause noticeable image degradation, I believe I can see an example (in the video and their official screenshots) where the reduced resolution causes artifacts. If you look at the smoothly curving surfaces on the ring under the ship (as the camera zooms in just after 59s), you might be able to see a slight horizontal jaggedness, almost a Moiré effect. While I'm not 100% sure that it's caused by the forced dip in resolution, it doesn't seem to appear in the TAA version. If this is an artifact of the lowered resolution, I'm curious whether NVIDIA will allow us to run at native resolution and still perform DLSS, or if the algorithm simply doesn't operate that way.

[Image: NVIDIA's Side-by-Side Sample with TAA]

[Image: NVIDIA's Side-by-Side Sample with DLSS]

[Image: DLSS with artifacts pointed out]

Image Credit: NVIDIA and Futuremark

That said, the image quality of DLSS is significantly above TAA. It's painful watching an object move smoothly in a deferred rendering setup and seeing TAA freak out just enough to be noticeable… but not enough to justify going back to a forward-rendering system with MSAA.

Source: NVIDIA

PC Perspective Podcast #529 - HyperX Cloud MIX, G-SYNC Compatible Monitors

Subject: General Tech | January 17, 2019 - 07:02 AM |
Tagged: video, Threadripper, podcast, Optane, micron, Intel, hyperx, g-sync compatibility, g-sync, freesync, cortana, 3dmark

PC Perspective Podcast #529 - 1/16/2019

This week on the show, we look at a review of a new wireless gaming headset from HyperX, talk about the new G-SYNC Compatibility program for FreeSync monitors, look at ray tracing performance in the new 3DMark Port Royal benchmark, and more!

Subscribe to the PC Perspective Podcast

Check out previous podcast episodes: http://pcper.com/podcast

Show Topics
01:34 - Review: HyperX Cloud MIX
05:19 - News: G-SYNC Compatible Monitor Driver
13:38 - News: Threadripper NUMA Dissociater
15:47 - News: HardOCP Interview with AMD's Scott Herkelman
21:35 - News: Intel-Micron 3D XPoint Split
24:34 - News: Cortana & Windows 10 Search
29:38 - News: 3DMark Port Royal Ray Tracing Benchmark
35:53 - Picks of the Week
46:24 - Outro

Sponsor: This week's episode is brought to you by Casper. Save $50 on select mattresses by visiting http://www.casper.com/pcper and using promo code pcper at checkout.

Picks of the Week
Jim: iPhone XS Max Battery Case
Jeremy: 3D-Printed Resistor Storage
Josh: ASRock X470 Taichi Motherboard
Sebastian: Koss KPH30ik Headphones

Today's Podcast Hosts
Sebastian Peak
Josh Walrath
Jeremy Hellstrom
Jim Tanous

Slow light, testing ray tracing performance with Port Royal

Subject: Graphics Cards | January 14, 2019 - 02:23 PM |
Tagged: 3dmark, port royal, ray tracing

UL recently released an inexpensive update to the 3DMark benchmarking suite that lets you test your ray tracing performance; you can grab Port Royal for a few dollars on Steam. With limited time to use the benchmark and only a small pool of GPUs that can properly run it, it has not yet made its way into most benchmarking suites. Bjorn3D took the time to install it on a decent system and tested the performance of the Titan and the five RTX cards currently on the market.

As you can see, it is quite the punishing test; not even NVIDIA's flagship card can maintain 60 fps.

[Image: 3DMark Port Royal screenshot]

"3DMark is finally updated with its newest benchmark designed specifically to test real time ray tracing performance. The benchmark we are looking at today is Port Royal, it is the first really good repeatable benchmark I have seen available that tests new real time ray tracing features."


Source: Bjorn3D

3DMark Port Royal Ray Tracing Benchmark Launches January 8th

Subject: Graphics Cards | December 10, 2018 - 10:36 AM |
Tagged: 3dmark, ray tracing, directx raytracing, raytracing, rtx, benchmarking, benchmarks

After first announcing it last month, UL this weekend provided new information on its upcoming ray tracing-focused addition to the 3DMark benchmarking suite. Port Royal, what UL calls the "world's first dedicated real-time ray tracing benchmark for gamers," will launch Tuesday, January 8, 2019.

For those eager for a glimpse of the new ray-traced visual spectacle, or for the majority of gamers without a ray tracing-capable GPU, the company has released a video preview of the complete Port Royal demo scene.

Access to the new Port Royal benchmark will be limited to the Advanced and Professional editions of 3DMark. Existing 3DMark users can upgrade to the benchmark for $2.99, and it will become part of the base $29.99 Advanced Edition package for new purchasers starting January 8th.

Real-time ray tracing promises to bring new levels of realism to in-game graphics. Port Royal uses DirectX Raytracing to enhance reflections, shadows, and other effects that are difficult to achieve with traditional rendering techniques.

As well as benchmarking performance, 3DMark Port Royal is a realistic and practical example of what to expect from ray tracing in upcoming games: ray tracing effects running in real time at reasonable frame rates at 2560×1440.

UL says Port Royal was developed with input from AMD, Intel, NVIDIA, and other leading technology companies, and that it worked especially closely with Microsoft to create a first-class implementation of the DirectX Raytracing API.

Port Royal will run on any graphics card with drivers that support DirectX Raytracing. As with any new technology, there are limited options for early adopters, but more cards are expected to get DirectX Raytracing support in 2019.

3DMark can be acquired via Steam or directly from UL's online store. The Advanced Edition, which includes access to all benchmarks, is priced at $29.99.

Intel Core i9-7980XE Pushed to 6.1 GHz On All Cores Using Liquid Nitrogen

Subject: Processors | September 25, 2017 - 09:36 PM |
Tagged: skylake-x, overclocking, Intel Skylake-X, Intel, Cinebench, 7980xe, 3dmark, 14nm

Renowned overclocker der8auer got his hands on the new 18-core Intel Core i9-7980XE and managed to break a few records with more than a bit of LN2 and thermal paste. Following a delid, der8auer slathered the bare die and surrounding PCB with a polymer-based TIM (Kryonaut) and reattached the IHS to prepare for the extreme overclock. He also experimented with milling out the middle of the IHS to strike a balance between direct-die cooling and using the IHS to spread out the pressure from the LN2 cooler block and prevent bending the PCB, but he ran into inconsistent results between runs and opted not to proceed with that method.

[Image: Core i9-7980XE LN2 overclock]

Using an Asus Rampage VI Apex X299 motherboard at an Asus ROG event in Taiwan, der8auer used liquid nitrogen to push all eighteen cores (plus Hyper-Threading) of the Core i9-7980XE to 6.1 GHz for a CPU-Z validation. To hit that clockspeed he needed to crank the voltage up to 1.55V (1.8V VCCIN), which is a lot for the 14nm Skylake-X processor. Der8auer noted that overclocking was temperature limited beyond this point: at 6.1 GHz he was seeing positive temperatures on the CPU cores despite the surface of the LN2 block being as low as -100 °C! Perhaps even more incredible is the power draw at these clockspeeds, with the system pulling as much as 1,000 watts (~83 amps) on the +12V rail and the CPU responsible for almost all of that number! That is a lot of power running through the motherboard VRMs and the on-processor FIVR!

For comparison, at 5.5 GHz he measured 70 amps on the +12V rail (840W) with the chip using 1.45V vcore under load.
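Those figures are internally consistent, if you want to check them against P = V × I on the +12V rail:

```typescript
// Sanity check: P = V * I on the +12V rail, using the article's figures.
const volts = 12;

// 6.1 GHz run: ~1,000 W reported on the +12V rail.
console.log(`${(1000 / volts).toFixed(0)} A`); // => "83 A", as stated

// 5.5 GHz run: 70 A measured.
console.log(`${70 * volts} W`); // => "840 W", as stated
```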

[Image: Core i9-7980XE CPU-Z overclock validation]

For Cinebench R15, the extreme overclocker opted for a tamer 5.7 GHz, at which the i9-7980XE achieved a multithreaded score of 5,635 points. He compared that to his AMD Threadripper overclock of 5.4 GHz, which achieved a Cinebench score of 4,514 (granted, the Intel part was using four more threads and was clocked higher).

To push things (especially his power supply, heh) further, the overclocker added an LN2-cooled NVIDIA Titan Xp to the mix and managed to overclock the graphics card to 2455 MHz at 1.4V. With the 3840 Pascal cores at 2.455 GHz, he broke three single-card world records, scoring 45,705 in 3DMark 11, 35,782 in 3DMark Fire Strike, and 120,425 in 3DMark Vantage!

Der8auer also made a couple of interesting statements about overclocking at these levels. Cold bugs can prevent the CPU and/or GPU from booting if the cooler plate is too cold; on the other hand, once the chip is running, power consumption can jump so drastically with more voltage and higher clocks that even LN2 can't maintain sub-zero core temperatures! The massive temperature delta can also create condensation issues that need to be dealt with. He mentions that while liquid metal TIMs are popular choices for 24/7 overclocking, the alloy actually works against extreme overclockers because sub-zero temperatures reduce its effectiveness and thermal conductivity, which is why polymer-based TIMs are used when cooling with liquid nitrogen, liquid helium, or TECs. Also, while most people apply a thin layer of thermal paste to the bare die or IHS, when extreme overclocking he "drowns" the processor die and PCB in TIM to get as much contact as possible with the cooler; every bit of heat transfer helps, even the small amount he can move through the PCB. Further, the FIVR has advantages such as per-core voltage fine tuning, but it can also hold back further overclocking: its cold bugs will shut the processor down past -100 to -110 °C, limiting overclocks, whereas an external VRM setup could possibly push the processor further.

For the full scoop, check out his overclocking video. Interesting stuff!

Source: der8auer

Futuremark Adds Vulkan, Removes Mantle from 3DMark

Subject: Graphics Cards | March 28, 2017 - 04:32 PM |
Tagged: vulkan, DirectX 12, Futuremark, 3dmark

The latest update to 3DMark adds Vulkan support to its API Overhead test, which attempts to render as many simple objects as possible while staying above 30 FPS. Higher draw call throughput lets developers add more objects to a scene and design those art assets in a simpler, more straightforward way. This is now one of the first tests that can directly compare DirectX 12 and Vulkan, two APIs we expected to be roughly equivalent but couldn't measure against each other until now.

While I wasn't able to run the tests myself, Luca Rocchi of Ocaholic gave it a shot on their Core i7-5820K and GTX 980. Vulkan was just under 10% faster than DirectX 12 in their results, reaching 22.6 million draw calls versus 20.6 million for DirectX 12. Again, this is one test, done by a third party, on a single system, with a single GPU driver, on a single 3D engine, and one designed to stress a specific portion of the API at that; take it with a grain of salt. Still, it suggests that Vulkan can keep pace with the slightly older DirectX 12 API, and maybe even beat it.
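For what it's worth, the "just under 10%" figure checks out against the raw draw call numbers:

```typescript
// Relative speedup from Ocaholic's API Overhead results.
const vulkanDrawCalls = 22.6e6; // draw calls per second, Vulkan
const dx12DrawCalls = 20.6e6;   // draw calls per second, DirectX 12

const speedup = (vulkanDrawCalls - dx12DrawCalls) / dx12DrawCalls;
console.log(`${(speedup * 100).toFixed(1)}% faster`); // => "9.7% faster"
```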

This update also removed Mantle support. I just thought I’d mention that.

Source: Futuremark

Podcast #409 - GTX 1060 Review, 3DMark Time Spy Controversy, Tiny Nintendo and more!

Subject: General Tech | July 21, 2016 - 12:21 PM |
Tagged: Wraith, Volta, video, time spy, softbank, riotoro, retroarch, podcast, nvidia, new, kaby lake, Intel, gtx 1060, geforce, asynchronous compute, async compute, arm, apollo lake, amd, 3dmark, 10nm, 1070m, 1060m

PC Perspective Podcast #409 - 07/21/2016

Join us this week as we discuss the GTX 1060 review, controversy surrounding the async compute of 3DMark Time Spy and more!!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

This episode of the PC Perspective Podcast is sponsored by Casper!

Hosts:  Ryan Shrout, Allyn Malventano, Jeremy Hellstrom, and Josh Walrath

Program length: 1:34:57
  1. Week in Review:
  2. 0:51:17 This episode of the PC Perspective Podcast is sponsored by Casper!
  3. News items of interest:
  4. 1:26:26 Hardware/Software Picks of the Week
    1. Ryan: Sapphire Nitro Bot
    2. Allyn: klocki - chill puzzle game (also on iOS / Android)
  5. Closing/outro

Manufacturer: Overclock.net

Yes, We're Writing About a Forum Post

Update - July 19th @ 7:15pm EDT: Well that was fast. Futuremark published their statement today. I haven't read it through yet, but there's no reason to wait until I do to link it.

Update 2 - July 20th @ 6:50pm EDT: We interviewed Jani Joki, Futuremark's Director of Engineering, on our YouTube page. The interview is embedded just below this update.

Original post below

Comments on a previous post pointed us to an Overclock.net thread whose author claims that 3DMark's implementation of asynchronous compute is designed to show NVIDIA in the best possible light. At the end of the linked post, they note that asynchronous compute is a general blanket term, and that we should better understand what is actually going on.

[Image: AMD Mantle queue diagram]

So, before we address the controversy, let's actually explain what asynchronous compute is. The main problem is that it actually is a broad term. Asynchronous compute could describe any optimization that allows tasks to execute when it is most convenient, rather than just blindly doing them in a row.

I will use JavaScript as a metaphor. In this language, you can assign tasks to be executed asynchronously by passing functions as parameters. This allows events to execute code when it is convenient. JavaScript, however, is still only single-threaded (without Web Workers and newer technologies). It cannot run callbacks from multiple events simultaneously, even if you have an available core on your CPU. What it does, however, is allow the browser to manage its time better. Many events can be delayed until the browser renders the page, until it finishes other high-priority tasks, or until the asynchronous code has everything it needs, like assets that are loaded from the internet.
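To make that concrete, here's a minimal sketch of the callback pattern described above (written in TypeScript; the function names are hypothetical). Nothing in it runs in parallel; the runtime simply defers each callback until the single thread is free.

```typescript
// A hypothetical asset loader that takes a callback, JavaScript-style.
function loadAsset(name: string, onReady: (data: string) => void): void {
  // Simulate an asset arriving from the network some time later.
  setTimeout(() => onReady(`contents of ${name}`), 100);
}

loadAsset("texture.png", (data) => {
  // Runs later, whenever the single-threaded event loop gets to it.
  console.log("asset ready:", data);
});

console.log("still busy on the same thread");
// Output order: "still busy on the same thread", then "asset ready: ..."
```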

[Image: Mozilla browser architecture diagram]

This is asynchronous computing.

However, if JavaScript were designed differently, it would be possible to run callbacks on any available thread, not just the main thread when it is free. Again, JavaScript is not designed this way, but this is where I pull the analogy back to AMD's Asynchronous Compute Engines. In an ideal situation, a graphics driver will be able to see all the functionality that a task requires and shove it down an already-busy GPU, provided the specific resources that the task requires are not fully utilized by the existing work.

Read on to see how this is being implemented, and what the controversy is.

3DMark Sale and a brand new DX12 benchmark, Time Spy

Subject: General Tech | June 23, 2016 - 06:37 PM |
Tagged: Futuremark, Time Spy, 3dmark

A new version of DirectX hitting the market means we need a new benchmark, and once again Futuremark has delivered with the Time Spy benchmark. Right now 3DMark is 80% off on Steam, and if you pick it up you will get access to the new Time Spy Basic benchmark when it is released.

Time Spy uses the new DirectX 12 API and supports new features like asynchronous compute, explicit multi-adapter, and multi-threading. It will have reviewers digging out hardware they thought they had already tested to provide you with new benchmark data points, applicable to currently available DX12 games as well as those yet to be released.

[Image: 3DMark Time Spy screenshot 1]

This is also a great opportunity to pick up the full version of the benchmark for your own use, even if you have yet to upgrade to DX12 hardware. Check out the teaser trailer if you are familiar with past 3DMark versions; you will catch a few glimpses of benchmark scenes that caused you mental raster burn in years past.

[Image: 3DMark Time Spy screenshot 2]

Source: Futuremark