Subject: General Tech | November 23, 2016 - 06:39 PM | Jeremy Hellstrom
Tagged: nvidia, gears of war 4, gaming, dx12, async compute, amd
[H]ard|OCP sat down with the new DX12-based Gears of War 4 to test the game's performance on a variety of cards, with a focus on the effect of enabling Async Compute. In their testing they found no reason to disable Async Compute, as it did not hurt performance on any card. On the other hand, NVIDIA's offerings do not benefit in any meaningful way from the feature, and while AMD's cards certainly did, the gain was not enough to let you run everything at maximum on an RX 480. Overall the game was no challenge for any of the cards except perhaps the RX 460 and the GTX 1050 Ti. When playing at 4K resolution they saw memory usage in excess of 6GB, making the GTX 1080 the card for those who want to play with the highest graphical settings. Get more details and benchmarks in their full review.
"We take Gears of War 4, a new Windows 10 only game supporting DX12 natively and compare performance with seven video cards. We will find out which one provides the best experience at 4K, 1440p, and 1080p resolutions, and see how these compare to each other. We will also look specifically at the Async Compute feature."
Here is some more Tech News from around the web:
- Total War: WARHAMMER NVIDIA Linux Benchmarks @ Phoronix
- Total War: Warhammer’s Wood Elves like to shoot and run @ Rock, Paper, SHOTGUN
- Deus Ex: Mankind Divided DX12 Performance @ [H]ard|OCP
- Star Wars Battlefront’s Rogue One DLC on December 6th @ Rock, Paper, SHOTGUN
- AMD Radeon RX 470 Hitman Complete promo goes live @ HEXUS
- Shadow Tactics demo offers Commandos-y stealth @ Rock, Paper, SHOTGUN
- AMD & NVIDIA GPU VR Performance - Google Earth VR @ [H]ard|OCP
- Quick Look: Dark Souls III: Ashes of Ariandel @ GiantBomb
- Origin/EA Black Friday Sale
- AI War 2 returns to Kickstarter, smaller and cheaper @ Rock, Paper, SHOTGUN
Subject: General Tech | November 3, 2016 - 02:35 PM | Ryan Shrout
Tagged: vrm, video, skyrim, qualcomm, prodigy, powercolor, podcast, nxp, multi-gpu, msi, micron, logitech, GTX 1080, gtx 1070, g231, evga, dx12, devil box, deus ex: mankind divided, amd, Alienware 13
PC Perspective Podcast #423 - 11/03/16
Join us this week as we discuss the Logitech Prodigy G231, multi-GPU scaling with DX12, Qualcomm buying NXP, issues with GTX 1070 and 1080 cards and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Allyn Malventano, Josh Walrath, Jeremy Hellstrom
Program length: 1:10:25
Fragging Frogs VLAN 14 (summary)
Week in Review:
Today’s episode is brought to you by Harry’s! Use code PCPER at checkout!
News items of interest:
0:28:45 Qualcomm is going for a drive
Hardware/Software Picks of the Week
Jeremy: Need big long term storage
Subject: Graphics Cards | November 1, 2016 - 03:57 PM | Ryan Shrout
Tagged: video, rx 480, radeon, nvidia, multi-gpu, gtx 1060, geforce, dx12, deus ex: mankind divided, amd
Last week a new update was pushed out to Deus Ex: Mankind Divided that made DX12 part of the mainline build and also integrated early multi-GPU support under DX12. I wanted to quickly see what kind of scaling it provided, as we still have very few proof points on the benefit of running more than one graphics card in games utilizing the DX12 API.
As it turns out, the current build and driver combination only shows scaling on the AMD side of things. NVIDIA still doesn't have DX12 multi-GPU support enabled at this point for this title.
- Test System
- Core i7-5960X
- X99 MB + 16GB DDR4
- AMD Radeon RX 480 8GB
- Driver: 16.10.2
- NVIDIA GeForce GTX 1060 6GB
- Driver: 375.63
Not only do we see great scaling in terms of average frame rates, but using PresentMon for frame time measurement we also see that the frame pacing is consistent and provides the user with a smooth gaming experience.
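For readers curious how such captures are analyzed: PresentMon logs a CSV with one row per presented frame, including an MsBetweenPresents frame-time column. Below is a minimal sketch of summarizing such a log in Python; the file path and the particular statistics reported are our own illustrative choices, not PC Perspective's exact methodology.

```python
import csv
import statistics

def frame_stats(csv_path):
    """Summarize frame pacing from a PresentMon capture.

    PresentMon logs one row per presented frame; the
    'MsBetweenPresents' column holds the frame time in ms.
    """
    with open(csv_path, newline="") as f:
        times = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    times.sort()
    avg_ms = statistics.mean(times)
    # 99th-percentile frame time: the worst 1% of frames are at or above this.
    p99_ms = times[min(int(len(times) * 0.99), len(times) - 1)]
    return {
        "avg_fps": 1000.0 / avg_ms,
        "p99_frame_time_ms": p99_ms,
        "stddev_ms": statistics.stdev(times),  # lower = more consistent pacing
    }
```

A high average FPS with a low frame-time standard deviation is what "consistent pacing" looks like in the numbers; a high average with a fat 99th percentile is where stutter hides.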
Subject: Graphics Cards | October 20, 2016 - 12:08 AM | Scott Michaud
Tagged: amd, nvidia, gtx 1060, rx 480, dx12, dx11, battlefield 1
Battlefield 1 is just a few days from launching. In fact, owners of the Deluxe Edition had the game unlock yesterday. It's interesting that multiple publishers are using release date as a special edition bonus these days, including Microsoft's recent Windows Store releases. I'm not going to say interesting in a good or bad way, though, because I'll leave that up to the reader to decide.
Anywho, DigitalFoundry is doing their benchmarking thing, and they wanted to see what GPU could provide a solid 60FPS when everything is maxed out (at 1080p). They start off with a DX12-to-DX12 comparison between the GTX 1060 and the RX 480. This is a relatively fair comparison, because the 3GB GTX 1060 and the 4GB RX 480 both come in at about $200, while upgrading to 6GB for the 1060 or 8GB for the 480 bumps each respective SKU up to the ~$250 price point. In this test, NVIDIA has a few dips slightly below 60 FPS in complex scenes, while AMD stays above that beloved threshold.
They also compare the two cards in DX11 and DX12 mode, with both using a Skylake-based Core i5 CPU. In this test, AMD's card saw a nice increase in frame rate when switching to DirectX 12, while NVIDIA had a performance regression in the new API. This raises two questions, one of which is potentially pro-NVIDIA, and the other, pro-AMD. First, if NVIDIA's card were allowed to use DirectX 11, would the original test show the GTX 1060 as more competitive against the DX12-running RX 480? That brings me to the second question: what would the user see? A major draw of Mantle-based graphics APIs is that the application has more control over traditionally driver-level tasks. Would 60 FPS in DX12 be smoother than 60 FPS in DX11?
I don't know. It's something we'll need to test.
Subject: General Tech | September 14, 2016 - 06:16 PM | Jeremy Hellstrom
Tagged: frame rating, deus ex: mankind divided, dx12, gaming
Just as we do here at PC Perspective, The Tech Report relies on rating frame times to provide accurate benchmarks, as opposed to the raw number of frames per second a card produces. This means that their look at the new DX12 patch for Deus Ex focuses on different data and does not produce the same results FRAPS would. It shows: switching to DX12 produces much longer frame times in Deus Ex, with many spikes and a significant number of frames that take more than 50ms to render. Drop by to see their full look here.
"An early version of Deus Ex: Mankind Divided's DirectX 12 rendering path is available now, and many sites and AMD itself are already producing average FPS numbers using that software. We go inside the second to see what the real story is."
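The distinction matters because an FPS average can look identical for a smooth run and a stuttery one. Here is a minimal illustration (with made-up frame times) of the "time spent beyond 50 ms" style of metric The Tech Report favours:

```python
# Two captures with the same average FPS but very different smoothness.
# Frame times are in milliseconds; the spiky run hitches past 50 ms.
smooth = [20.0] * 50                  # 50 steady 20 ms frames, 1000 ms total
spiky = [10.0] * 45 + [110.0] * 5     # also 50 frames / 1000 ms, with 5 big hitches

def avg_fps(frame_times_ms):
    """Average FPS: frames rendered divided by total seconds elapsed."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def time_beyond(frame_times_ms, threshold_ms):
    """Total ms spent past the threshold; this is where stutter shows up."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)
```

Both runs average exactly the same FPS, but the spiky run spends 300 ms beyond the 50 ms mark while the smooth run spends none, which is exactly the difference the frame-time approach catches and the FPS average hides.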
Here is some more Tech News from around the web:
- Deus Ex: Mankind Divided - PC graphics performance @ Guru of 3D
- Deus Ex: Mankind Divided Review @ OCC
- Dishonored 2: Happy Hour With Corvo Attano @ Rock, Paper, SHOTGUN
- Humble Store End of Summer Sale
- Wot I Think: Master Of Orion @ Rock, Paper, SHOTGUN
- CryEngine 5.3 release to add Vulkan support this November @ HEXUS
- Quick Look: The Turing Test @ GiantBomb
- Endless Space 2 Hands On: Buying Planets As The Mafia-Like Lumeris @ Rock, Paper, SHOTGUN
- Battlefleet Gothic: Armada Opening Tau Beta Tomorrow @ Rock, Paper, SHOTGUN
Subject: General Tech | July 13, 2016 - 06:23 PM | Jeremy Hellstrom
Tagged: gaming, dx12, civilization VI, asynchronous compute, amd
AMD, 2K, and Firaxis Games have been working together to bring the newest DX12 features to Civilization VI, and today they announced their success. The new game will incorporate Asynchronous Compute in its engine as well as support for Explicit Multi-Adapter for those with multiple GPUs. This should give AMD cards a significant performance boost when running the game, at least until NVIDIA catches up with support for the new technologies ... HairWorks is not going to have as much of an effect on your units as Async Compute will.
"Complete with support for advanced DirectX 12 features like asynchronous compute and explicit multi-adapter, PC gamers the world over will be treated to a high-performance and highly-parallelized game engine perfectly suited to sprawling, complex civilizations."
Here is some more Tech News from around the web:
- Battlefield 1’s New Medic Class Will Revive & Retaliate @ Rock, Paper, SHOTGUN
- Rise of the Tomb Raider gets improved DX12 multi-GPU support @ HEXUS
- Wot I Think: INSIDE @ Rock, Paper, SHOTGUN
- Support Sniper Ana Will be Overwatch's First New Hero @ GiantBomb
- Sega Acquire Amplitude, Will Publish Endless Space 2 @ Rock, Paper, SHOTGUN
- Ghostbusters (2016) @ Polygon
- Long War Studios Release New XCOM 2 Mods @ Rock, Paper, SHOTGUN
Subject: Processors | June 27, 2016 - 06:40 PM | Jeremy Hellstrom
Tagged: dx12, 6700k, Intel, i7-6950X
[H]ard|OCP has been conducting tests using a variety of CPUs to see how well DX12 distributes load between cores compared to DX11. Their final article, which covers the 6700K and 6950X, was done a little differently and so cannot be directly compared to the previously tested CPUs. That does not lower the value of the testing; scaling is still very obvious, and the new tests were designed to highlight more common usage scenarios for gamers. Read on to see how well, or how poorly, Ashes of the Singularity scales when using DX12.
"This is our fourth and last installment of looking at the new DX12 API and how it works with a game such as Ashes of the Singularity. We have looked at how DX12 is better at distributing workloads across multiple CPU cores than DX11 in AotS when not GPU bound. This time we compare the latest Intel processors in GPU bound workloads."
Here are some more Processor articles from around the web:
Subject: Graphics Cards | April 29, 2016 - 11:09 PM | Jeremy Hellstrom
Tagged: amd, dx12, async shaders
Earlier in the month [H]ard|OCP investigated the performance scaling Intel processors display in DX12, and now they have finished their tests on AMD processors. These tests include Async Compute information, so be warned before venturing forth into the comments. [H] tested an FX 8370 at 2GHz and 4.3GHz to see what effect clock speed had on the games; the 3GHz tests did not add any value and were dropped in favour of these two frequencies. There are some rather interesting results and discussion, so drop by for the details.
"One thing that has been on our minds about the new DX12 API is its ability to distribute workloads better on the CPU side. Now that we finally have a couple of new DX12 games that have been released to test, we spend a bit of time getting to the bottom of what DX12 might be able to do for you. And a couple sentences on Async Compute."
Here are some more Graphics Card articles from around the web:
Subject: General Tech | April 20, 2016 - 07:14 PM | Jeremy Hellstrom
Tagged: hitman 2016, gaming, dx12, asynchronous compute, ashes of the singularity
DX12 is very new and with these two games utilizing it, Hitman 2016 and Ashes of the Singularity, it is difficult to get a good sample of results to see exactly what the new API will offer. [H]ard|OCP have been working with both of these games to determine the performance differences between DX11 and DX12 and to find where the bottlenecks, if any, are. With Ashes they tried limiting the CPU, one set of tests at 1.2GHz and the second at 4.5GHz which showed how well DX12 lived up to the touted benefits of reduced CPU usage. They also tested with older GPUs on a 4.5GHz CPU to see if the new API does indeed help out older GPUs. They also delve somewhat into the confusion surrounding AMD's Asynchronous ace in the hole.
For Hitman they contrasted various GPUs from both AMD and NVIDIA while leaving the CPU alone for the testing. This review emphasizes the performance delta between DX11 and DX12 on the same GPUs, and unfortunately also addresses some stability issues which DX12 has brought with it. Read through the review to see what results they gathered so far but do not consider this the final word since both NVIDIA and AMD's GPUs could barely manage 10 minutes of DX12 gaming before completely locking up.
We still have a lot more investigation to perform before we can define the strengths and weaknesses of DX12.
"Hitman (2016) supports the new DirectX 12 API. We will take this game and find out if DX12 is faster than DX11 and what it may offer in this game and if it allows a better gameplay experience. We will also compare Hitman performance between several video cards to find what is playable and how AMD vs. NV GPUs compare."
Here is some more Tech News from around the web:
- Battlezone 98 Redux Brings Back FPS-RTS Fun @ Rock, Paper, SHOTGUN
- Total Warcraft: Hammers Offers Zone News, I Think @ Rock, Paper, SHOTGUN
- Ashes of the Singularity Review @ OCC
- XCOM Beyond Earth: Shock Tactics @ Rock, Paper, SHOTGUN
- Mafia III will be released on 7th October on PC, PS4 and Xbox One @ HEXUS
- Ratchet & Clank @ Polygon
- Vroomshakalaka! Rocket League Hoops Mode Next Week @ Rock, Paper, SHOTGUN
- Quick Look: Banner Saga 2 @ Giant Bomb
Subject: Graphics Cards | April 14, 2016 - 10:17 PM | Scott Michaud
Tagged: microsoft, windows 10, uwp, DirectX 12, dx12
At the PC Gaming Conference from last year's E3 Expo, Microsoft announced that they were looking to bring more first-party titles to Windows. They used to be one of the better PC gaming publishers, back in the Mechwarrior 4 and earlier Flight Simulator days, but they got distracted as Xbox 360 rose and Windows Vista fell.
Again, part of that is because they attempted to push users to Windows Vista and Games for Windows Live, holding back troubled titles like Halo 2: Vista and technologies like DirectX 10 from Windows XP, which drove users to Valve's then-small Steam platform. Epic Games was also a canary in the coalmine at that time, warning users that Microsoft was considering certification for Games for Windows Live, which threatened mod support “because Microsoft's afraid of what you might put into it”.
It's sometimes easy to conform history to fit a specific viewpoint, but it does sound... familiar.
Anyway, we're glad that Microsoft is bringing first-party content to the PC, and they are perfectly within their rights to structure it however they please. We are also within our rights to point out its flaws and ask for them to be corrected. Turns out that Quantum Break, like Gears of War before it, has some severe performance issues. Let's be clear, these will likely be fixed, and I'm glad that Microsoft didn't artificially delay the PC version to give the console an exclusive window. Also, had they delayed the PC version until it was fixed, we wouldn't have known whether it needed the time.
Still, the game apparently has issues with a 50 FPS top-end cap, on top of pacing-based stutters. One concern I have is that, because DigitalFoundry is a European publication, the 50Hz issue might be caused by the port being derived from a PAL version of the game. Despite suggesting it, I would be shocked if that were the case, but I'm just trying to figure out why anyone would create a ceiling at that specific interval. They are also seeing NVIDIA's graphics drivers frequently crash, which probably means some areas of their DirectX 12 support are not quite what the game expects. Again, that is solvable in drivers.
It's been a shaky start for both DirectX 12 and the Windows 10 UWP platform. We'll need to keep waiting and see what happens going forward. I hope this doesn't discourage Microsoft too much, but also that they robustly fix the problems we're discussing.