Serious Sam VR, now with tag teaming NVIDIA cards

Subject: General Tech | November 30, 2016 - 03:31 PM |
Tagged: serious sam vr, nvidia, gaming, pascal

Having already looked at AMD's performance with two RX 480s in a system, the recent patch enabling support for multiple NVIDIA GPUs has dragged [H]ard|OCP back into the game.  Lacking a pair of Titan X cards, they tested the performance of a pair of GTX 1080s and a pair of GTX 1070s; the GTX 1060 will not be receiving support from Croteam.  It would seem that adding a second Pascal card to your system will benefit you; however, the scaling they saw was nowhere near as impressive as with the AMD RX 480, which saw a 36% boost.  Check out the full results here, and yes ... in this case the m in mGPU indicates multiple GPUs, not mobile.
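As a rough illustration of how an mGPU scaling figure like that 36% boost is derived from benchmark numbers (the frame rates below are invented for the example, not [H]ard|OCP's measurements):

```python
# Toy calculation of multi-GPU scaling from average frame rates.
# The frame rates used here are hypothetical; see the linked
# [H]ard|OCP article for the real measurements.

def scaling_percent(single_gpu_fps, dual_gpu_fps):
    """Percentage frame-rate gain from adding a second GPU."""
    return (dual_gpu_fps / single_gpu_fps - 1.0) * 100.0

# A hypothetical single-card result of 100 FPS rising to 136 FPS
# with a second card corresponds to a 36% scaling boost:
print(f"{scaling_percent(100.0, 136.0):.0f}%")  # prints "36%"
```

Perfect scaling would be 100% (double the frame rate); anything well below that suggests the second card is frequently idle.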

"Serious Sam VR was the first commercial enthusiast gaming title to include multi-GPU support with AMD's RX 480 GPU. Now the folks at Croteam have added mGPU support for NVIDIA cards as well. We take a look at how well NVIDIA's VRSLI technology fares in this VR shooter title."


Source: [H]ard|OCP

November 30, 2016 | 04:21 PM - Posted by Anonymous (not verified)

So HardOCP is testing GTX 1080/1070 multi-GPU and the scaling was not so hot. This is an async-compute fully-in-the-hardware advantage for AMD and not so much for Nvidia. I'm still going to give Nvidia time to get things better, simply because of the ongoing conversion of games, gaming engines, and the software/API ecosystem over to the DX12/Vulkan APIs. I expect that AMD's multi-GPU scaling will continue to improve, and that Nvidia needs some serious work time to get its multi-GPU scaling performing better.

Gamers need to ask Nvidia directly about any DX12/Vulkan explicit multi-adapter usage for the GTX 1060/1050 SKUs, because that is supposed to be under the control of the game/gaming engine via the DX12/Vulkan APIs. Something is rotten in Denmark if Nvidia gets in the way of the DX12/Vulkan APIs' ability to make use of any and all GPU hardware plugged into a system.

November 30, 2016 | 11:30 PM - Posted by renz (not verified)

AFAIK Nvidia did not block any of their cards from working with DX12 multi-GPU; hence we saw the 1060 in "SLI" able to work in AoS under DX12. But it still depends on how the game developer implements their multi-GPU in DX12. In DXMD, for example, the developers did not do the multi-GPU optimization themselves; instead they used the existing profiles that come from the Nvidia/AMD drivers to make their multi-GPU work in DX12. 1060 "SLI" did not work in DXMD DX12 because Nvidia never made a profile for the 1060 to begin with. Instead of relying on the AMD/Nvidia drivers, developers should make their own optimizations, since they have full control over the implementation.

Also, when it comes to VR, I think things like Nvidia SMP (Simultaneous Multi-Projection) are a more clever solution than using multi-GPU to increase performance in VR titles. Not many people can afford a multi-GPU system, and some people simply cannot justify the multi-GPU route when the support is limited to certain games and does not benefit all games.

December 1, 2016 | 08:40 AM - Posted by Anonymous (not verified)

"This is an asymc compute fully in the hardware advantage for AMD and not so much for Nvidia."

Hardware-based (as opposed to driver-based) async shader dispatch is not some magic go-faster paste that can be smeared on any GPU function to improve performance.

December 1, 2016 | 07:42 PM - Posted by Anonymous (not verified)

Hardware-based async shader dispatch is much more responsive, with much lower latency, than any software-based shader dispatch. The same is true for CPU cores that have simultaneous multithreading (SMT) hardware in the core. Intel's SMT-capable CPU cores (HyperThreading is Intel's trade name for its brand of SMT) can extract 15%–30% more work because of the SMT hardware in the core. No software solution could respond quickly enough to the CPU's dynamically changing execution workloads in a small enough time frame, and the same goes for any GPU's shader cores.

SMT managed in a CPU's hardware is a prime example of asynchronous compute fully in hardware, and the same goes for asynchronous compute managed fully in a GPU's hardware for the many thousands of shaders on a GPU. Do you know how many CPU FP/Int/other pipeline cycles can occur in the time it takes simply to fetch one instruction from even the level-one code cache? The SMT hardware in any SMT-enabled CPU core must manage the two processor threads directly from the instruction queue and instruction buffer, in the fastest part of the core, above even the L1 instruction cache.

So for any sort of GPU shader-core management or instruction dispatch, the hardware must be there in the GPU's compute unit or shader-core management block to make sure that the shader cores are not left idle while the work queues are backed up with work needing to be dispatched to the shaders' ALU/FP/Int and other execution resources. Shader cores are just like CPU cores in this respect: the dispatch and management of the many cores' FP/Int/other units needs hardware specifically engineered for the job, in the compute units or the equivalent structures on any maker's GPU, if that maker uses a hardware-based solution rather than a software-based one.

Nvidia can do some management in software, but that only works for predictable execution workloads where some latency hiding can be applied. When things get dynamic and there are a lot of asynchronous events that cannot be predicted, a GPU without the fully-in-hardware ability to respond quickly enough to unexpected asynchronous events will leave execution resources underutilized while work is still waiting in the shader and other work queues.
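The commenter's point about idle execution units can be sketched with a toy scheduler model (entirely invented numbers and a deliberately simplified model, not a description of any real GPU): a dispatcher that can switch to an independent work queue during a stall keeps its execution unit busier than one that cannot.

```python
# Toy model of latency hiding: several work items feed one execution
# unit.  When an item stalls (e.g. waiting on memory), a dispatcher
# that can asynchronously switch to other pending work hides the
# stall; one that cannot leaves the unit idle for the duration.
# This assumes perfect overlap when switching, which is an
# idealization for illustration only.

def utilization(tasks, can_switch):
    """tasks: list of (busy_cycles, stall_cycles) pairs.
    Returns the fraction of total cycles the unit spent doing work."""
    busy = idle = 0
    pending = list(tasks)
    while pending:
        work, stall = pending.pop(0)
        busy += work
        # A stall costs idle cycles unless the dispatcher can switch
        # to other pending work to hide it.
        if stall and not (can_switch and pending):
            idle += stall
    return busy / (busy + idle)

queues = [(4, 6), (4, 6), (4, 0)]
print(utilization(queues, can_switch=False))  # prints 0.5
print(utilization(queues, can_switch=True))   # prints 1.0
```

In this toy model the non-switching dispatcher wastes half its cycles on stalls, while the switching one hides them entirely; a real GPU lands somewhere in between depending on how quickly its scheduling (hardware or software) can react.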

December 1, 2016 | 07:18 AM - Posted by Anonymous (not verified)

Sad for the 1060, but expected with no SLI support from Nvidia.

RX 480 cards keep getting more and more impressive by the day!