Manufacturer: Stardock

Benchmark Overview

I knew that the move to DirectX 12 was going to be a big shift for the industry. Since the introduction of the AMD Mantle API along with the Hawaii GPU architecture, we have been inundated with game developers and hardware vendors talking about the potential benefits of lower level APIs, which give game developers and game engines more direct access to GPU hardware and enable more flexible CPU threading. The result, we were told, would be that your current hardware could take you further, and that future games and applications could fundamentally change how they are built to tremendously enhance gaming experiences.

I knew that reader interest in DX12 was outstripping my expectations when I did a live blog of the official DX12 unveiling by Microsoft at GDC. In a format that consisted simply of my text commentary and photos of the slides being shown (no video at all), we had more than 25,000 live readers who stayed engaged the whole time. Comments and questions flew in during the event – more than my staff or I could possibly handle in real time. It turned out that gamers were indeed very much interested in what DirectX 12 might offer them with the release of Windows 10.

game3.jpg

Today we are taking a look at the first real world gaming benchmark that utilizes DX12. Back in March I was able to do some early testing with an API-specific test from Futuremark's 3DMark that evaluates the overhead implications of DX12, DX11 and even AMD Mantle. That first look at DX12 was interesting and painted an amazing picture of the potential benefits of the new API from Microsoft, but it wasn’t built on a real game engine. In our Ashes of the Singularity benchmark testing today, we finally get an early look at what a real implementation of DX12 looks like.

And as you might expect, not only are the results interesting, but a significant amount of controversy has already been created around what those results actually tell us. AMD has one story, NVIDIA another, and Stardock and the Nitrous engine developers yet another. It’s all incredibly intriguing.

Continue reading our analysis of the Ashes of the Singularity DX12 benchmark!!

Manufacturer: PC Perspective

It's Basically a Function Call for GPUs

Mantle, Vulkan, and DirectX 12 all claim to reduce overhead and provide a staggering increase in “draw calls”. As mentioned in the previous editorial, the way applications load the graphics card with tasks changes drastically in these new APIs. With DirectX 10 and earlier, an application would assign attributes to (what it is told is) the global state of the graphics card. After everything is configured and bound, one of a few “draw” functions is called, which queues the task in the graphics driver as a “draw call”.
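To make that concrete, here is a rough C++ sketch of the pattern. The buffer, shader, and layout names are placeholders I made up for illustration, and DirectX 11's immediate context – which keeps essentially the same bind-then-draw model – stands in for the older APIs:

```cpp
// A minimal sketch of the bind-then-draw pattern described above, using
// D3D11's immediate context. Resource creation and error handling omitted.
#include <windows.h>
#include <d3d11.h>

void DrawOneMesh(ID3D11DeviceContext* ctx,        // the "global state" object
                 ID3D11Buffer* vertexBuffer,
                 ID3D11Buffer* indexBuffer,
                 ID3D11InputLayout* layout,
                 ID3D11VertexShader* vs,
                 ID3D11PixelShader* ps,
                 UINT indexCount)
{
    UINT stride = sizeof(float) * 8;              // assumed vertex layout
    UINT offset = 0;

    // Each call below mutates state the driver treats as belonging to
    // "the" graphics card -- nothing actually executes yet.
    ctx->IASetInputLayout(layout);
    ctx->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
    ctx->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R32_UINT, 0);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    ctx->VSSetShader(vs, nullptr, 0);
    ctx->PSSetShader(ps, nullptr, 0);

    // Only here does the driver queue a task: this is one "draw call".
    ctx->DrawIndexed(indexCount, 0, 0);
}
```

Each state-setting call is cheap on its own; the overhead the new APIs attack is the driver work that happens when the final draw is validated against all of that accumulated global state.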

Not only does this model suggest that just a single graphics device is to be defined, which we also mentioned in the previous article, it also implies that one thread needs to be the authority. This limitation had been known for a while, and it contributed to the meme that consoles can squeeze out all the performance they have while PCs are “too high level” for that. Microsoft tried to combat this with “Deferred Contexts” in DirectX 11. This feature allows virtual, shadow states to be built up on secondary threads and then appended, whole, to the global state. It was a compromise between letting each thread create its own commands and the legacy decision to have a single, global state for the GPU.
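As a rough illustration (not code from any particular engine), this is what the Deferred Context compromise looks like in practice: a worker thread records into its own shadow context, and the owner of the immediate context replays the result against the single global state.

```cpp
// Sketch of D3D11 Deferred Contexts; error handling omitted for brevity.
#include <windows.h>
#include <d3d11.h>

ID3D11CommandList* RecordOnWorkerThread(ID3D11Device* device)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);    // per-thread shadow state

    // ... set state and issue draws on `deferred` exactly as on the
    //     immediate context; nothing reaches the GPU yet ...

    ID3D11CommandList* commands = nullptr;
    deferred->FinishCommandList(FALSE, &commands);  // bake the recorded work
    deferred->Release();
    return commands;
}

void SubmitOnMainThread(ID3D11DeviceContext* immediate, ID3D11CommandList* commands)
{
    // The whole recorded bundle is appended to the global state in one go.
    immediate->ExecuteCommandList(commands, FALSE);
    commands->Release();
}
```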

Some developers experienced gains, while others lost a bit. It didn't live up to expectations.

pcper-2015-dx12-290x.png

The paradigm used to load graphics cards is the problem. It doesn't make sense anymore. A developer might not want to draw a primitive with every poke of the GPU. At times, they might want to shove a workload of simple linear algebra through it, while other requests could simply be pushing memory around to set up a later task (or to read the result of a previous one). More importantly, any thread could want to do this to any graphics device.
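Here is a quick DirectX 12 sketch of what that looks like in code – one possible arrangement rather than a prescription: an application can stand up separate queues for graphics, compute, and copy (DMA-style) work, and command lists recorded on any thread can later be submitted to whichever queue fits the job.

```cpp
// Creating independent work queues in D3D12; error handling omitted.
#include <windows.h>
#include <d3d12.h>

void CreateWorkQueues(ID3D12Device* device,
                      ID3D12CommandQueue** graphicsQueue,
                      ID3D12CommandQueue** computeQueue,
                      ID3D12CommandQueue** copyQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics (can also do compute/copy)
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(graphicsQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // pure linear-algebra style workloads
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(computeQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;     // shoving memory around
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(copyQueue));
}
```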

pcper-2015-dx12-980.png

The new graphics APIs allow developers to submit their tasks quicker and smarter, and they allow the drivers to schedule compatible tasks better, even simultaneously. In fact, the driver's job has been massively simplified altogether. When we tested 3DMark back in March, two interesting things were revealed:

  • AMD and NVIDIA are only a two-digit percentage apart in draw call performance
  • Both AMD and NVIDIA saw an order of magnitude increase in draw calls

Read on to see what this means for games and game development.

Podcast #360 - Intel XPoint Memory, Windows 10 and DX12, FreeSync displays and more!

Subject: General Tech | July 30, 2015 - 02:45 PM |
Tagged: podcast, video, Intel, XPoint, nand, DRAM, windows 10, DirectX 12, freesync, g-sync, amd, nvidia, benq, uhd420, wasabi mango, X99, giveaway

PC Perspective Podcast #360 - 07/30/2015

Join us this week as we discuss Intel XPoint Memory, Windows 10 and DX12, FreeSync displays and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Sebastian Peak

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

 

Manufacturer: PC Perspective

... But Is the Timing Right?

Windows 10 is about to launch and, with it, DirectX 12. Apart from the massive increase in draw calls, Explicit Multiadapter, both Linked and Unlinked, has been the cause of a few pockets of excitement here and there. I am a bit concerned, though. People seem to treat this as a new, novel concept that gives game developers tools they've never had before. It really isn't. Depending on what you want to do with secondary GPUs, game developers could have used them for years. Years!

Before we talk about the cross-platform examples, we should talk about Mantle. It is the closest analog to DirectX 12 and Vulkan that we have. It served as the base specification for Vulkan, which the Khronos Group modified to use SPIR-V instead of HLSL, among other changes. Some claim that it was also the foundation of DirectX 12, which would not surprise me given what I've seen online and in the SDK. Allow me to show you how the API works.

amd-2015-mantle-execution-model.png

Mantle is an interface that mixes Graphics, Compute, and DMA (memory access) commands into queues. This is easily done in parallel, as each thread can create commands on its own, which is great for multi-core processors. Each queue – a list of commands heading to the GPU – can also be handled independently. An interesting side effect is that, since every device uses standard data structures, such as IEEE 754 floating-point numbers, no one cares where these queues go as long as the work is done quickly enough.

Since each queue is independent, an application can choose to manage many of them. None of these lists really needs to know what is happening in any other. As such, they can be pointed at multiple, even wildly different graphics devices. Different models of GPU with different capabilities can work together, as long as they support the core of Mantle.
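Mantle's own headers were never broadly published, so as a stand-in here is how its descendant, Vulkan, exposes the same queue model. The helpers below are my own hypothetical examples: find a queue family with the capability you want on whichever device you like, then hand it work that any thread could have recorded.

```cpp
// Sketch of Vulkan's queue model, standing in for Mantle's.
#include <vulkan/vulkan.h>
#include <vector>

uint32_t FindQueueFamily(VkPhysicalDevice gpu, VkQueueFlags wanted)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i)
        if (families[i].queueFlags & wanted)    // graphics, compute, or transfer (DMA)
            return i;
    return UINT32_MAX;                          // not supported on this device
}

void SubmitToQueue(VkQueue queue, VkCommandBuffer recordedWork)
{
    // The queue does not care which thread recorded the command buffer,
    // only that the work arrives as a well-formed submission.
    VkSubmitInfo submit = {};
    submit.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers    = &recordedWork;
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}
```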

microsoft-dx12-build15-ue4frame.png

DirectX 12 and Vulkan adopted this metaphor so that developers could use this functionality across vendors. Mantle did not invent the concept, however. What Mantle did was expose this architecture to graphics, which can make use of all the fixed-function hardware that is unique to GPUs. Even before AMD's implementation, this was how GPU compute architectures were designed. Game developers could have spun up an OpenCL workload to process physics, audio, pathfinding, visibility, or even lighting and post-processing effects... on a secondary GPU, even one from a completely different vendor.

Vista's multi-GPU bug might get in the way, but it was possible in 7 and, I believe, XP too.
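For the curious, here is a minimal, hypothetical OpenCL sketch of that scenario: enumerate every vendor's platform, gather the GPUs, and naively treat the second one found as the “spare” that could chew on physics or pathfinding. A real engine would obviously pick its secondary device far more carefully.

```cpp
// Sketch: find a secondary GPU across all OpenCL platforms (vendors).
#include <CL/cl.h>
#include <vector>

cl_device_id FindSecondaryGPU()
{
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    std::vector<cl_device_id> gpus;
    for (cl_platform_id p : platforms)        // platforms can be different vendors
    {
        cl_uint numDevices = 0;
        if (clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &numDevices) != CL_SUCCESS)
            continue;
        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, numDevices, devices.data(), nullptr);
        gpus.insert(gpus.end(), devices.begin(), devices.end());
    }

    // Naive rule: treat the second enumerated GPU as the "spare" one.
    return gpus.size() > 1 ? gpus[1] : nullptr;
}
```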

Read on to see a couple reasons why we are only getting this now...

Computex 2015: EVGA Builds PrecisionX 16 with DirectX 12 Support

Subject: Graphics Cards | June 1, 2015 - 10:58 AM |
Tagged: evga, precisionx, dx12, DirectX 12

Another interesting bit of news surrounding Computex and the new GTX 980 Ti comes from EVGA and its PrecisionX software. This is easily our favorite tool for overclocking and GPU monitoring, so it's great to see the company continuing to push forward with features and capability. EVGA is the first to add full support for DX12 with an overlay.

precisionx16.jpg

What does that mean? It means that as DX12 applications find their way out to consumers and media, we will now have a tool that can help measure performance and monitor GPU speeds and feeds via the PrecisionX overlay. Before this release, we were running in the dark with DX12 demos, so this is great news!

You can download the latest version over on EVGA's website!

Podcast #348 - DirectX 12, New AMD GPU News, Giveaways and more!

Subject: General Tech | May 7, 2015 - 03:17 PM |
Tagged: podcast, video, amd, Fiji, hbm, microsoft, build 2015, DirectX 12, Intel, SSD 750, freesync, gsync, Oculus, rift

PC Perspective Podcast #348 - 05/07/2015

Join us this week as we discuss DirectX 12, New AMD GPU News, Giveaways and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Manufacturer: Microsoft

DirectX 12 Has No More Secrets

The DirectX 12 API is finalized and the last of its features are known. Before the BUILD conference, the list consisted of Conservative Rasterization, Rasterizer Ordered Views, Typed UAV Load, Volume Tiled Resources, and a new Tiled Resources revision for non-volumetric content. When the GeForce GTX 980 launched, NVIDIA claimed it would be compatible with DirectX 12 features. Enthusiasts were skeptical, because Microsoft had not officially finalized the spec at the time.

Last week, Microsoft announced the last feature of the graphics API: Multiadapter.

We already knew that Multiadapter existed, at least to some extent. It is the part of the specification that allows developers to address multiple graphics adapters and split tasks between them. In DirectX 11 and earlier, secondary GPUs would remain idle unless the graphics driver sprinkled some magic fairy dust on them with SLI, CrossFire, or Hybrid CrossFire. The only other way to access this dormant hardware was by spinning up an OpenCL (or similar compute API) context on the side.
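To put that in code terms, here is a hedged sketch of the enumeration side of Multiadapter-style programming: walk every adapter DXGI reports and create an independent D3D12 device on each one, regardless of whether the driver would have linked them with SLI or CrossFire.

```cpp
// Sketch: one D3D12 device per enumerated adapter; error handling omitted.
#include <windows.h>
#include <dxgi.h>
#include <d3d12.h>
#include <vector>

std::vector<ID3D12Device*> CreateDeviceOnEveryAdapter()
{
    std::vector<ID3D12Device*> devices;

    IDXGIFactory1* factory = nullptr;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        ID3D12Device* device = nullptr;
        if (SUCCEEDED(D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);        // integrated, discrete, any vendor
        adapter->Release();
    }

    factory->Release();
    return devices;
}
```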

Read on to see what DirectX 12 does differently...

Square Enix Announces Deus Ex: Mankind Divided

Subject: General Tech | April 10, 2015 - 07:00 AM |
Tagged: tressfx, square enix, eidos montreal, dx12, DirectX 12, deus ex: mankind divided, deus ex

Deus Ex: Human Revolution came out in 2011 as a prequel to Ion Storm's Deus Ex and Deus Ex: Invisible War. Human Revolution was made after Warren Spector left the company and Eidos closed down the Austin, Texas developer, leaving the franchise to Eidos Montreal. By the time of Human Revolution's release, Eidos had already been purchased by the Japanese publisher, Square Enix. Deus Ex was set in 2052 and Invisible War was set in 2072. Human Revolution, being a prequel as mentioned earlier, rewound the clock to 2027 and introduced a new main character, Adam Jensen. It explored the rise of machine-human augmentations that formed much of the lore in the original titles.

square-eidos-deus-ex-mankind-glow.jpg

Timeline and theme established, Square Enix has just announced Deus Ex: Mankind Divided, the sequel to the prequel, with a great looking (albeit a little bloody) trailer. It is set in 2029, just two years after the events of Human Revolution. It will be coming to the PC, as well as the two most-next-gen consoles. As expected, Adam Jensen returns as the main character. Now that Square Enix and its subsidiary, Eidos, have spent so much to build him up as a brand, it makes sense that they would continue with the consumer recognition. It makes sense from a business perspective, although it probably means the franchise will meander less through time. I will leave it up to the reader to decide whether that's good or bad.

AMD Gaming has also tweeted out that Mankind Divided, or its PC version at the very least, will utilize both DirectX 12 and TressFX. I am curious whether TressFX has been updated to take advantage of the new API, given how important GPU compute is to the new graphics standards. No release date has been set.

Source: Square Enix

GDC 15: Intel shows 3DMark API Overhead Test at Work

Subject: Graphics Cards, Processors | March 4, 2015 - 08:46 PM |
Tagged: GDC, gdc 15, API, dx12, DirectX 12, dx11, Mantle, 3dmark, Futuremark

It's probably not a surprise to most that Futuremark is working on a new version of 3DMark around the release of DirectX 12. What might be new for you is that this version will include an API overhead test, used to evaluate how a hardware configuration performs under the Mantle, DX11 and DX12 APIs.

3dmark1.jpg

While we don't have any results quite yet (those are pending and should come very soon), Intel was showing the feature test running at an event at GDC tonight. In what looks like a simple cityscape being rendered over and over, the goal is to see how many draw calls the API and hardware can handle – in other words, how fast the CPU can react to a game engine.

The test was being showcased on an Intel-powered notebook using a 5th Generation Core processor, code-named Broadwell. This points to the upcoming support for DX12 (though obviously not Mantle) that Intel's integrated GPUs will provide.

It should be very interesting to see how much of an advantage DX12 offers over DX11, even on Intel's wide range of consumer and enthusiast processors.

Ubisoft Discusses Assassin's Creed: Unity with Investors

Subject: General Tech, Graphics Cards | February 15, 2015 - 07:30 AM |
Tagged: ubisoft, DirectX 12, directx 11, assassins creed, assassin's creed, assasins creed unity

During a conference call with investors, analysts, and press, Yves Guillemot, CEO of Ubisoft, highlighted the issues with Assassin's Creed: Unity with an emphasis on the positive outcomes going forward. Their quarter itself was good, beating expectations and allowing them to raise full-year projections. As expected, they announced that a new Assassin's Creed game would be released at the end of the year based on the technology they created for Unity, with “lessons learned”.

ubisoft-assassins-creed-unity-scene.jpg

Before optimization, every material on every object is at least one draw call.

Of course, there are many ways to optimize... but that effort works against future titles.

After their speech, the question period revisited the topic of Assassin's Creed: Unity and how it affected current sales, how it would affect the franchise going forward, and how they should respond to that foresight (Audio Recording - the question starts at 25:20). Yves responded that they redid “100% of the engine”, which was a tremendous undertaking. “When you do that, it's painful for all the group, and everything has to be recalibrated.” He continues: “[...] but the engine has been created, and it is going to help that brand to shine in the future. It's steps that we need to take regularly so that we can constantly innovate. Those steps are sometimes painful, but they allow us to improve the overall quality of the brand, so we think this will help the brand in the long term.”

This makes a lot of sense to me. When the issues first arose, it was speculated that the engine was pushing way too many draw calls, especially for DirectX 11 PCs. At the time, I figured that Ubisoft chose Assassin's Creed: Unity to be the first title to use their new development pipeline, focused on many simple assets rather than batching things together to minimize host-to-GPU and GPU-to-host interactions. Tens of thousands of individual tasks being sent to the GPU will choke a PC, and getting it to run at all on DirectX 11 might have diverted resources from, or even caused, many of the glitches. Currently, a few thousand is ideal although “amazing developers” can raise the ceiling to about ten thousand.
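For readers wondering what “batching things together” actually looks like, here is a simple, hypothetical D3D11 contrast between the two approaches. The per-instance data layout is an assumption on my part, purely for illustration.

```cpp
// Naive per-object draws vs. one instanced draw in D3D11.
#include <windows.h>
#include <d3d11.h>

void DrawCrowdNaively(ID3D11DeviceContext* ctx, UINT indexCount, UINT copies)
{
    for (UINT i = 0; i < copies; ++i)
    {
        // ... update a per-object constant buffer for copy i ...
        ctx->DrawIndexed(indexCount, 0, 0);     // one draw call per copy
    }
}

void DrawCrowdInstanced(ID3D11DeviceContext* ctx, UINT indexCount, UINT copies)
{
    // Per-instance transforms are assumed to live in a second vertex buffer
    // (or a structured buffer) that the vertex shader indexes by instance ID.
    ctx->DrawIndexedInstanced(indexCount, copies, 0, 0, 0);   // one draw call total
}
```

Batching like this keeps DirectX 11 happy, but it is exactly the kind of authoring constraint that the draw-call headroom of DirectX 12 is meant to remove.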

This also means that I expect the next Assassin's Creed title to support DirectX 12, possibly even in the graphics API's launch window. If I am correct, Ubisoft has been preparing for it for a long time. Of course, it is possible that I am simply wrong, but it would align with Microsoft's Holiday 2015 expectation for the first, big-budget titles to use the new interface and it would be silly to have done their big overhaul without planning on switching to DX12 ASAP.

Then there is the last concern: If I am correct, what should Ubisoft have done? Is it right for them to charge full price for a title that they know will have necessary birth pains? Do they delay it and risk (or even accept) that it will be non-profitable, and upset fans that way? There does not seem to be a clear answer, with all outcomes being some flavor of damage control.

Source: GamaSutra