Subject: General Tech | July 13, 2016 - 02:23 PM | Jeremy Hellstrom
Tagged: gaming, dx12, civilization VI, asynchronous compute, amd
AMD, 2K and Firaxis Games have been working together to bring the newest DX12 features to Civilization VI and today they announced their success. The new game will incorporate Asynchronous Compute in its engine as well as support for Explicit Multi-Adapter for those with multiple GPUs. This should give AMD cards a significant performance boost when running the game, at least until NVIDIA catches up with support for the new technologies ... HairWorks is not going to have as much effect on your units as Async Compute will.
"Complete with support for advanced DirectX 12 features like asynchronous compute and explicit multi-adapter, PC gamers the world over will be treated to a high-performance and highly-parallelized game engine perfectly suited to sprawling, complex civilizations."
Here is some more Tech News from around the web:
- Battlefield 1’s New Medic Class Will Revive & Retaliate @ Rock, Paper, SHOTGUN
- Rise of the Tomb Raider gets improved DX12 multi-GPU support @ HEXUS
- Wot I Think: INSIDE @ Rock, Paper, SHOTGUN
- Support Sniper Ana Will be Overwatch's First New Hero @ GiantBomb
- Sega Acquire Amplitude, Will Publish Endless Space 2 @ Rock, Paper, SHOTGUN
- Ghostbusters (2016) @ Polygon
- Long War Studios Release New XCOM 2 Mods @ Rock, Paper, SHOTGUN
Subject: Processors | June 27, 2016 - 02:40 PM | Jeremy Hellstrom
Tagged: dx12, 6700k, Intel, i7-6950X
[H]ard|OCP has been conducting tests using a variety of CPUs to see how well DX12 distributes load between cores as compared to DX11. Their final article, which covers the 6700K and 6950X, was done a little differently and so cannot be directly compared to the previously tested CPUs. That does not lower the value of the testing; scaling is still very obvious, and the new tests were designed to highlight more common usage scenarios for gamers. Read on to see how well, or how poorly, Ashes of the Singularity scales when using DX12.
"This is our fourth and last installment of looking at the new DX12 API and how it works with a game such as Ashes of the Singularity. We have looked at how DX12 is better at distributing workloads across multiple CPU cores than DX11 in AotS when not GPU bound. This time we compare the latest Intel processors in GPU bound workloads."
Here are some more Processor articles from around the web:
Subject: Graphics Cards | April 29, 2016 - 07:09 PM | Jeremy Hellstrom
Tagged: amd, dx12, async shaders
Earlier in the month [H]ard|OCP investigated the performance scaling that Intel processors display in DX12; now they have finished their tests on AMD processors. These tests include Async Compute information, so be warned before venturing forth into the comments. [H] tested an FX 8370 at 2GHz and 4.3GHz to see what effect clock speed had on the games; the 3GHz tests did not add any value and were dropped in favour of these two frequencies. There are some rather interesting results and discussion, so drop by for the details.
"One thing that has been on our minds about the new DX12 API is its ability to distribute workloads better on the CPU side. Now that we finally have a couple of new DX12 games that have been released to test, we spend a bit of time getting to the bottom of what DX12 might be able to do for you. And a couple sentences on Async Compute."
Here are some more Graphics Card articles from around the web:
Subject: General Tech | April 20, 2016 - 03:14 PM | Jeremy Hellstrom
Tagged: hitman 2016, gaming, dx12, asynchronous compute, ashes of the singularity
DX12 is very new, and with only two games utilizing it, Hitman 2016 and Ashes of the Singularity, it is difficult to get a good sample of results to see exactly what the new API will offer. [H]ard|OCP have been working with both of these games to determine the performance differences between DX11 and DX12 and to find where the bottlenecks, if any, are. With Ashes they tried limiting the CPU, running one set of tests at 1.2GHz and a second at 4.5GHz, which showed how well DX12 lived up to the touted benefit of reduced CPU overhead. They also tested older GPUs with the CPU at 4.5GHz to see if the new API does indeed help out older cards, and delve somewhat into the confusion surrounding AMD's Asynchronous ace in the hole.
For Hitman they contrasted various GPUs from both AMD and NVIDIA while leaving the CPU alone for the testing. This review emphasizes the performance delta between DX11 and DX12 on the same GPUs, and unfortunately also addresses some stability issues which DX12 has brought with it. Read through the review to see what results they gathered so far but do not consider this the final word since both NVIDIA and AMD's GPUs could barely manage 10 minutes of DX12 gaming before completely locking up.
We still have a lot more investigation to perform before we can define the strengths and weaknesses of DX12.
"Hitman (2016) supports the new DirectX 12 API. We will take this game and find out if DX12 is faster than DX11 and what it may offer in this game and if it allows a better gameplay experience. We will also compare Hitman performance between several video cards to find what is playable and how AMD vs. NV GPUs compare."
Here is some more Tech News from around the web:
- Battlezone 98 Redux Brings Back FPS-RTS Fun @ Rock, Paper, SHOTGUN
- Total War: Warhammer Offers Zone News, I Think @ Rock, Paper, SHOTGUN
- Ashes of the Singularity Review @ OCC
- XCOM Beyond Earth: Shock Tactics @ Rock, Paper, SHOTGUN
- Mafia III will be released on 7th October on PC, PS4 and Xbox One @ HEXUS
- Ratchet & Clank @ Polygon
- Vroomshakalaka! Rocket League Hoops Mode Next Week @ Rock, Paper, SHOTGUN
- Quick Look: Banner Saga 2 @ Giant Bomb
Subject: Graphics Cards | April 14, 2016 - 06:17 PM | Scott Michaud
Tagged: microsoft, windows 10, uwp, DirectX 12, dx12
At the PC Gaming Conference from last year's E3 Expo, Microsoft announced that they were looking to bring more first-party titles to Windows. They used to be one of the better PC gaming publishers, back in the Mechwarrior 4 and earlier Flight Simulator days, but they got distracted as Xbox 360 rose and Windows Vista fell.
Again, part of that is because they attempted to push users to Windows Vista and Games for Windows Live, holding back troubled titles like Halo 2: Vista and technologies like DirectX 10 from Windows XP, which drove users to Valve's then-small Steam platform. Epic Games was also a canary in the coalmine at that time, warning users that Microsoft was considering certification requirements for Games for Windows Live, which threatened mod support “because Microsoft's afraid of what you might put into it”.
It's sometimes easy to conform history to fit a specific viewpoint, but it does sound... familiar.
Anyway, we're glad that Microsoft is bringing first-party content to the PC, and they are perfectly within their rights to structure it however they please. We are also within our rights to point out its flaws and ask for them to be corrected. Turns out that Quantum Break, like Gears of War before it, has some severe performance issues. Let's be clear, these will likely be fixed, and I'm glad that Microsoft didn't artificially delay the PC version to give the console an exclusive window. Also, had they delayed the PC version until it was fixed, we wouldn't have known whether it needed the time.
Still, the game apparently has issues with a 50 FPS top-end cap, on top of pacing-based stutters. One concern I have is that, because DigitalFoundry is a European publication, the 50Hz issue might be caused by their copy being based on a PAL version of the game. Despite suggesting it, I would be shocked if that were the case, but I'm just trying to figure out why anyone would create a ceiling at that specific interval. They are also seeing NVIDIA's graphics drivers frequently crash, which probably means that some areas of their DirectX 12 support are not quite what the game expects. Again, that is solvable by drivers.
It's been a shaky start for both DirectX 12 and the Windows 10 UWP platform. We'll need to keep waiting and see what happens going forward. I hope this doesn't discourage Microsoft too much, but also that they robustly fix the problems we're discussing.
Subject: General Tech | April 6, 2016 - 01:20 PM | Jeremy Hellstrom
Tagged: gaming, ashes of the singularity, dx12
Ashes of the Singularity comes with a canned benchmark which makes it easy to compare the performance delta between DX11 and DX12; actual gameplay may differ in performance, but the benchmark does make comparisons much easier. [H]ard|OCP set the graphics to Crazy and tried out the two top cards from NVIDIA and AMD in both APIs, with some very interesting results. The AMD cards performed well above expectation, the Fury X happily sitting at the top of the pack, though the 390X was more impressive still, matching the performance of the 980 Ti. The AMD cards also gained performance when running under DX12 compared to DX11, a feat the NVIDIA cards were not able to replicate.
It is still early days for the new DirectX and we should expect to see performance changes as drivers and game engines are refined but for now if you are looking to play this new RTS AMD is the way to go. Check out the full performance details as well as VRAM usage in [H]'s full review.
"The new Ashes of the Singularity game has finally been released on the PC. This game supports DX11 and the new DX12 API with advanced features. In this Day 1 Benchmark Preview we will run a few cards through the in-game canned benchmark comparing DX11 versus DX12 performance and NVIDIA versus AMD performance."
Here is some more Tech News from around the web:
Subject: General Tech | March 30, 2016 - 04:58 PM | Jeremy Hellstrom
Tagged: dx12, rise of the tomb raider, gaming
DX12 API support has arrived for the new Lara Croft game, for those with the hardware and software to support it. For AMD users that means Fury or 300-family cards, which offer DX12.0 support; for NVIDIA, the 980 Ti and all other Maxwell GPUs, which offer 12.1 as well as 12.0. The difference in support is likely down to the game, not the hardware, in this case. [H]ard|OCP take a look at how well the cards perform in both DX11 and DX12, and as it turns out the warning below is very accurate; perhaps you should wait for a few more driver updates and game patches before switching over to DX12.
"Rise of the Tomb Raider has recently received a new patch which adds DX12 API support, in addition the patch adds NVIDIA VXAO Ambient Occlusion technology, however just under DX11. In this evaluation we will find out if DX12 is beneficial to the gameplay experience currently and how it impacts certain GPUs."
Here is some more Tech News from around the web:
- Hitman PC game analysis @ Kitguru
- EVE Valkyrie Blasts Off With Launch Trailer @ Rock, Paper, SHOTGUN
- Far Cry Primal Review @ OCC
- Call of Duty franchise heads for space @ The Inquirer
- Fallout 4 Survival Mode Beta Now Available @ Rock, Paper, SHOTGUN
- Grand Theft Auto 6 is in production, likely to be set in US @ HEXUS
- The Best ARK: Survival Evolved Mods @ Rock, Paper, SHOTGUN
- Mass Effect Andromeda Details Slip Out @ Rock, Paper, SHOTGUN
Subject: Graphics Cards | March 15, 2016 - 02:02 AM | Ryan Shrout
Tagged: vulkan, raja koduri, Polaris, HBM2, hbm, dx12, crossfire, amd
After hosting the AMD Capsaicin event at GDC tonight, the SVP and Chief Architect of the Radeon Technologies Group Raja Koduri sat down with me to talk about the event and offered up some additional details on the Radeon Pro Duo, upcoming Polaris GPUs and more. The video below has the full interview but there are several highlights that stand out as noteworthy.
- Raja claimed that one of the reasons to launch the dual-Fiji card as the Radeon Pro Duo for developers rather than pure Radeon, aimed at gamers, was to “get past CrossFire.” He believes we are at an inflection point with APIs. Where previously you would abstract two GPUs to appear as a single to the game engine, with DX12 and Vulkan the problem is more complex than that as we have seen in testing with early titles like Ashes of the Singularity.
But with the dual-Fiji product mostly developed and prepared, AMD was able to find a market between the enthusiast and the creator to target, and thus the Radeon Pro branding was born.
Raja further expanded on this, telling me that in order to make multi-GPU useful and productive for the next generation of APIs, getting multi-GPU hardware solutions into the hands of developers is crucial. He admitted that CrossFire has had performance scaling concerns and compatibility issues in the past, and that getting multi-GPU right from the ground up this time is essential.
- With changes in Moore’s Law and the realities of process technology and processor construction, multi-GPU is going to be more important for the entire product stack, not just the extreme enthusiast crowd. Why? Because realities are dictating that GPU vendors build smaller, more power efficient GPUs, and to scale performance overall, multi-GPU solutions need to be efficient and plentiful. The “economics of the smaller die” are much better for AMD (and we assume NVIDIA) and by 2017-2019, this is the reality and will be how graphics performance will scale.
Getting the software ecosystem going now is going to be crucial to ease into that standard.
- The naming scheme of Polaris (10, 11…) has no equation, it’s just “a sequence of numbers” and we should only expect it to increase going forward. The next Polaris chip will be bigger than 11, that’s the secret he gave us.
There have been concerns that AMD was only going to go for the mainstream gaming market with Polaris but Raja promised me and our readers that we “would be really really pleased.” We expect to see Polaris-based GPUs across the entire performance stack.
- AMD’s primary goal here is to get many millions of gamers VR-ready, though getting the enthusiasts “that last millisecond” is still a goal and it will happen from Radeon.
- No solid date on Polaris parts at all – I tried! (Other than the launches start in June.) Though Raja did promise that after tonight, he will not have his next alcoholic beverage until the launch of Polaris. Serious commitment!
- Curious about the HBM2 inclusion in Vega on the roadmap and what that means for Polaris? Though he didn’t say it outright, it appears that Polaris will be using HBM1, leaving me to wonder about the memory capacity limitations inherent in that. Has AMD found a way to get past the 4GB barrier? We are trying to figure that out for sure.
Why is Polaris going to use HBM1? Raja pointed towards the extreme cost of building the HBM ecosystem and prepping the pipeline for the new memory technology as the culprit; AMD obviously wants to recoup some of that cost with another generation of GPU usage.
Speaking with Raja is always interesting, and the confidence and knowledge he showcases is still what gives me assurance that the Radeon Technologies Group is headed in the correct direction. This is going to be a very interesting year for graphics, PC gaming and GPU technologies, as showcased throughout the Capsaicin event, and I think everyone should be looking forward to it.
Podcast #390 - ASUS Z170 Sabertooth Mk1, Corsair Carbide 400C, more about Windows Store Games, and more!
Subject: General Tech | March 10, 2016 - 02:10 PM | Ken Addison
Tagged: podcast, video, asus, z170 sabertooth, corsair, carbide 400c, Windows Store, uwp, dx12, amd, nvidia, directflip, 16.3, 364.47, 364.51, SFX, Seagate, OCP, NVMe
PC Perspective Podcast #390 - 03/10/2016
Join us this week as we discuss the ASUS Z170 Sabertooth Mk1, Corsair Carbide 400C, more about Windows Store Games, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store (audio only)
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano
Program length: 1:12:32
Week in Review:
News items of interest:
Hardware/Software Picks of the Week
Ryan: Windows 10 Domain Networking and Shares
A start to proper testing
During all the commotion last week surrounding the release of a new Ashes of the Singularity DX12 benchmark, Microsoft's launch of the Gears of War Ultimate Edition on the Windows Store and the company's supposed desire to merge Xbox and PC gaming, a constant source of insight for me was one Andrew Lauritzen. Andrew is a graphics guru at Intel with extensive knowledge of DirectX, rendering, engines, etc., and has always been willing to teach and educate me on areas that crop up. The entire DirectX 12 and Universal Windows Platform situation was definitely one such instance.
Yesterday morning Andrew pointed me to a GitHub release for a tool called PresentMon, a small sample of code written by a colleague of Andrew's that might be the beginnings of being able to properly monitor performance of DX12 games and even UWP games.
The idea is simple and its implementation even simpler: PresentMon monitors the Windows event tracing stack for present commands and records data about them to a CSV file. Anyone familiar with the kind of ETW data you can gather will appreciate that PresentMon culls out nearly all of the headache of data gathering by simplifying the results into application name/ID, Present call deltas and a bit more.
Gears of War Ultimate Edition - the debated UWP version
The "Present" method in Windows is what produces a frame and shows it to the user. PresentMon looks at the Windows events running through the system, takes note of when those present commands are received by the OS for any given application, and records the time between them. Because this tool runs at the OS level, it can capture Present data from all kinds of APIs including DX12, DX11, OpenGL, Vulkan and more. It does have limitations though - it is read only so producing an overlay on the game/application being tested isn't possible today. (Or maybe ever in the case of UWP games.)
What PresentMon offers us at this stage is an early look at a Fraps-like performance monitoring tool. In the same way that Fraps was looking for Present commands from Windows and recording them, PresentMon does the same thing, at a very similar point in the rendering pipeline as well. What is important and unique about PresentMon is that it is API independent and useful for all types of games and programs.
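To make the idea concrete, here is a minimal sketch of the kind of post-processing that per-Present CSV data enables: turning the deltas between Present calls into frame-time and FPS statistics. The column name `msBetweenPresents` and the sample values are assumptions for illustration only; the tool's actual output schema may differ.

```python
import csv
import io

def frame_stats(csv_text):
    """Compute average frame time (ms), average FPS, and worst-case
    frame time from per-Present deltas in a PresentMon-style CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    deltas = [float(row["msBetweenPresents"]) for row in reader]
    avg_ms = sum(deltas) / len(deltas)
    worst_ms = max(deltas)  # a single long gap between Presents reads as a stutter
    return {"avg_ms": avg_ms, "avg_fps": 1000.0 / avg_ms, "worst_ms": worst_ms}

# Hypothetical capture: three ~60 FPS frames and one dropped-to-30 hitch.
sample = """Application,msBetweenPresents
game.exe,16.7
game.exe,16.6
game.exe,33.4
game.exe,16.8
"""

stats = frame_stats(sample)
print(stats)
```

Note that an average alone hides exactly the hitching this hypothetical capture contains, which is why looking at worst-case (or percentile) frame times matters as much as the FPS number.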
PresentMon at work
The first and obvious question for our readers is how this performance monitoring tool compares with Frame Rating, the FCAT-based capture benchmarking platform we have used on GPUs and CPUs for years now. To be honest, it's not the same and should not be considered an analog to it. Frame Rating and capture-based testing look for smoothness, dropped frames and performance at the display, while Fraps and PresentMon look at performance closer to the OS level, before the graphics driver really gets the final say in things. I am still targeting universal DX12 Frame Rating testing with exclusive full screen capable applications and expect that to be ready sooner rather than later. However, what PresentMon does give us is at least an early universal look at DX12 performance, including games that are locked behind the Windows Store rules.