Subject: Graphics Cards | February 22, 2016 - 06:03 PM | Ryan Shrout
Tagged: vive, valve, steamvr, steam, rift, performance test, Oculus, htc
Though I am away from my stacks of hardware at the office attending Mobile World Congress in Barcelona, Valve dropped a bomb on us today in the form of a new hardware performance test that gamers can use to determine if they are ready for the SteamVR revolution. The aptly named "SteamVR Performance Test" is a free title available through Steam that any user can download and run to get a report card on their installed hardware. No VR headset required!
And unlike the Oculus Compatibility Checker, the application from Valve runs actual game content to measure your system. Oculus' app only looks at the hardware on your system for certification, not taking into account the performance of your system in any way. (Overclockers and users with Ivy Bridge Core i7 processors have been reporting failed results on the Oculus test for some time.)
The SteamVR Performance Test runs a set of scenes from the Aperture Science Robot Repair demo, an experience developed directly for the HTC Vive and one that I was able to run through during CES last month. Valve is using a very interesting new feature called "dynamic fidelity" that adjusts the game's image quality to avoid dropped frames and frame rates under 90 FPS, maintaining a smooth and comfortable experience for the VR user. Though this is the first time I have seen it used, it sounds similar to what John Carmack did with the id Tech 5 engine, attempting to balance performance on hardware while maintaining a targeted frame rate.
The technology could be a perfect match for VR content where frame rates above or at the 90 FPS target are more important than visual fidelity (in nearly all cases). I am curious to see how Valve may or may not pursue and push this technology in its own games and for the Vive / Rift in general. I have some questions pending with them, so we'll see what they come back with.
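Valve hasn't published how dynamic fidelity works internally, but the general idea can be sketched as a simple feedback controller over render resolution (the scale bounds, gain, and headroom threshold here are hypothetical, not Valve's actual values):

```python
# Conceptual sketch of a dynamic-fidelity controller (not Valve's actual
# algorithm): if a frame takes longer than the 90 FPS budget, reduce the
# render-resolution scale; if there is clear headroom, restore quality.
FRAME_BUDGET_MS = 1000.0 / 90.0  # ~11.1 ms per frame at 90 FPS

def adjust_fidelity(scale, last_frame_ms,
                    min_scale=0.5, max_scale=1.0, step=0.05):
    """Return a new render-resolution scale based on the last frame time."""
    if last_frame_ms > FRAME_BUDGET_MS:
        scale -= step          # over budget: trade quality for frame rate
    elif last_frame_ms < 0.9 * FRAME_BUDGET_MS:
        scale += step          # comfortable headroom: restore quality
    return max(min_scale, min(max_scale, scale))

# A slow frame drops the scale; fast frames recover it gradually.
scale = 1.0
scale = adjust_fidelity(scale, 14.0)   # slow frame: scale drops
scale = adjust_fidelity(scale, 8.0)    # fast frame: scale recovers
```

A real implementation would smooth over several frames rather than react to a single one, but the trade is the same: resolution is sacrificed before frame rate ever is.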
A result for a Radeon R9 Fury provided by AMD
Valve's test offers a very simple three-tiered breakdown for your system: Not Ready, Capable and Ready. For a more detailed explanation you can expand on the data to see metrics like the number of frames on which you were CPU bound, frames below the very important 90 FPS mark and how many frames were tested in the run. The Average Fidelity metric is the number that we are reporting below; it essentially tells us "how much quality" the test estimates you can run at while maintaining that 90 FPS mark. What else that fidelity result means is still unknown - but again we are trying to find out. The short answer is that the higher that number goes, the better off you are, and the more demanding game content you'll be able to run at acceptable performance levels. At least, according to Valve.
Because I am not at the office to run my own tests, I decided to write up this story using results from a third party. That third party is AMD - let the complaining begin. Obviously this does NOT count as independent testing but, in truth, it would be hard to cheat on these results unless you go WAY out of your way to change control panel settings, etc. The demo runs itself, and AMD detailed the hardware and drivers used in the results.
- Intel i7-6700K
- 2x4GB DDR4-2666 RAM
- Z170 motherboard
- Radeon Software 16.1.1
- NVIDIA driver 361.91
- Win10 64-bit
| GPU | Average Fidelity |
| --- | --- |
| 2x Radeon R9 Nano | 11.0 |
| GeForce GTX 980 Ti | 11.0 |
| Radeon R9 Fury X | 9.6 |
| Radeon R9 Fury | 9.2 |
| GeForce GTX 980 | 8.1 |
| Radeon R9 Nano | 8.0 |
| Radeon R9 390X | 7.8 |
| Radeon R9 390 | 7.0 |
| GeForce GTX 970 | 6.5 |
These results were provided by AMD in an email to the media. Take that for what you will until we can run our own tests.
First, the GeForce GTX 980 Ti is the highest performing single GPU tested, with a score of 11 - because of course it goes to 11. The same score is reported for the multi-GPU configuration with two Radeon R9 Nanos, so clearly we are seeing a ceiling in this version of the SteamVR Performance Test. Compared to the single R9 Nano's score of 8.0, that is only a 37.5% scaling rate, but I think we are limited by the test in this case. Either way, it's great news to see that AMD has affinity multi-GPU up and running, utilizing one GPU for each eye's rendering. (AMD pointed out that users who want to test the multi-GPU implementation will need to add the -multigpu launch option.) I still need to confirm whether GeForce cards scale accordingly. UPDATE: Ken at the office ran a quick check with a pair of GeForce GTX 970 cards with the same -multigpu option and saw no scaling improvements. It appears NVIDIA has work to do here.
Moving down the stack, it's clear why AMD was so excited to send out these early results. The R9 Fury X and R9 Fury both come out ahead of the GeForce GTX 980 while the R9 Nano, R9 390X and R9 390 result in better scores than NVIDIA's GeForce GTX 970. This comes as no surprise - AMD's Radeon parts tend to offer better performance per dollar when it comes to benchmarks and many games.
There is obviously a lot more to consider than the results this SteamVR Performance Test provides when picking hardware for a VR system, but we are glad to see Valve out in front of the many, many questions that are flooding forums across the web. Is your system ready??
Subject: Graphics Cards | February 19, 2016 - 07:11 PM | Scott Michaud
Tagged: vulkan, linux
Update: Venn continued to benchmark and came across a few extra discoveries. For example, he disabled VDPAU and jumped to 89.6 FPS in OpenGL and 80.6 FPS in Vulkan. Basically, be sure to read the whole thread; it may be updated further. Original post below (unless otherwise stated).
On Windows, the Vulkan patch of The Talos Principle leads to a net loss in performance, relative to DirectX 11. This is to be expected when a developer like Croteam optimizes their game for existing APIs, and tries to port all that work to a new, very different standard, with a single developer and three months of work. They explicitly state, multiple times, not to expect good performance.
Image Credit: Venn Stone of LinuxGameCast
On Linux, Venn Stone of LinuxGameCast found different results. With everything maxed out at 1080p, his OpenGL benchmark reports 38.2 FPS, while his Vulkan raises this to an average of 66.5 FPS. Granted, this was with an eight-core AMD FX-8150, which launched with the Bulldozer architecture back in 2011. It did not have the fastest single-threaded performance, falling behind even AMD's own Phenom II parts before it in that regard.
Still, this is a scenario that allowed the game to scale to Bulldozer's multiple cores and circumvent a lot of the driver overhead in OpenGL. It resulted in a 75% increase in performance, at least for people who pair a GeForce GTX 980 Ti ((Update: The Ti was a typo. Venn uses a standard GeForce GTX 980.)) with an eight-core Bulldozer CPU from 2011.
Subject: Graphics Cards | February 16, 2016 - 12:01 PM | Sebastian Peak
Tagged: rumor, report, nvidia, Maxwell 2.0, GTX 950 SE, GTX 950 LP, gtx 950, gtx 750, graphics card, gpu
A report from VideoCardz.com claims that NVIDIA is working on another GTX 950 graphics card, but not the 950 Ti you might have expected.
Reference GTX 950 (Image credit: NVIDIA)
While the GTX 750 Ti was succeeded by the GTX 950 in August of last year, the higher specs for this new GPU came at the cost of a higher TDP (90W vs. 60W). This new rumored GTX 950, which might be called either 950 SE or 950 LP according to the report, would be a lower power version of the GTX 950, and would actually have a lot more in common with the outgoing GTX 750 Ti than the plain GTX 750 as we can see from this chart:
(Image credit: VideoCardz)
As you can see the GTX 750 Ti is based on GM107 (Maxwell 1.0) and has 640 CUDA cores, 40 TUs, 16 ROPs, and it operates at 1020 MHz Base/1085 MHz Boost clocks. The reported specs of this new GTX 950 SE/LP would be nearly identical, though based on GM206 (Maxwell 2.0) and offering greater memory bandwidth (and slightly higher power consumption).
The VideoCardz report was sourced from Expreview, which claimed that this GTX 950 SE/LP product would arrive next month at some point. This report is a little more vague than some of the rumors we see, but it could very well be that NVIDIA has a planned replacement for the remaining Maxwell 1.0 products on the market. I would have personally expected to see a "Ti" product before any "SE/LP" version of the GTX 950, and this reported name seems more like an OEM product than a retail part. We will have to wait and see if this report is accurate.
Caught Up to DirectX 12 in a Single Day
I'm not just talking about the specification. Members of the Khronos Group have also released compatible drivers, SDKs and tools to support them, conformance tests, and a proof-of-concept patch for Croteam's The Talos Principle. To reiterate, this is not a soft launch. The API, and its entire ecosystem, is out and ready for the public on Windows (at least 7+ at launch but a surprise Vista or XP announcement is technically possible) and several distributions of Linux. Google will provide an Android SDK in the near future.
I'm going to editorialize for the next two paragraphs. There was a concern that Vulkan would be too late. The thing is, as of today, Vulkan is now just as mature as DirectX 12. Of course, that could change at a moment's notice; we still don't know how the two APIs are being adopted behind the scenes. A few DirectX 12 titles are planned to launch in the coming months, but no full, non-experimental, non-early access game currently exists. Each time I say this, someone links the Wikipedia list of DirectX 12 games. If you look at each entry, though, you'll see that all of them are either early access, awaiting an unreleased DirectX 12 patch, or using a third-party engine (like Unreal Engine 4) that only lists DirectX 12 as an experimental preview. Besides, if the latter counts, then you'll need to accept The Talos Principle's proof-of-concept patch, too.
But again, that could change. While today's launch speaks well to the Khronos Group and the API itself, it still needs to be adopted by third party engines, middleware, and software. These partners could, like the Khronos Group before today, be privately supporting Vulkan with the intent to flood out announcements; we won't know until they do... or don't. With the support of popular engines and frameworks, dependent software really just needs to enable it. This has not happened for DirectX 12 yet, and, now, there doesn't seem to be anything keeping it from happening for Vulkan at any moment. With the Game Developers Conference just a month away, we should soon find out.
But back to the announcement.
Vulkan-compatible drivers are launching today across multiple vendors and platforms, but I do not have a complete list. On Windows, I was told to expect drivers from NVIDIA for Windows 7, 8.x, 10 on Kepler and Maxwell GPUs. The standard is compatible with Fermi GPUs, but NVIDIA does not plan on supporting the API for those users due to its low market share. That said, they are paying attention to user feedback and they are not ruling it out, which probably means that they are keeping an open mind in case some piece of software gets popular and depends upon Vulkan. I have not heard from AMD or Intel about Vulkan drivers as of this writing, one way or the other. They could even arrive day one.
On Linux, NVIDIA, Intel, and Imagination Technologies have submitted conformant drivers.
Drivers alone do not make a hard launch, though. SDKs and tools have also arrived, including the LunarG SDK for Windows and Linux. LunarG is a company co-founded by Lens Owen, who had a previous graphics software company that was purchased by VMware. LunarG is backed by Valve, who also backed Vulkan in several other ways. The LunarG SDK helps developers validate their code, inspect what the API is doing, and otherwise debug. Even better, it is also open source, which means that the community can rapidly enhance it, even though it's in a releasable state as it is. RenderDoc, the open-source graphics debugger by Crytek, will also add Vulkan support. ((Update (Feb 16 @ 12:39pm EST): Baldur Karlsson has just emailed me to let me know that it was a personal project at Crytek, not a Crytek project in general, and their GitHub page is much more up-to-date than the linked site.))
The major downside is that Vulkan (like Mantle and DX12) isn't simple.
These APIs are verbose and very different from previous ones, which requires more effort.
Image Credit: NVIDIA
There really isn't much to say about the Vulkan launch beyond this. What graphics APIs really try to accomplish is standardizing signals that enter and leave video cards, such that the GPUs know what to do with them. For the last two decades, we've settled on an arbitrary, single, global object that you attach buffers of data to, in specific formats, and call one of a half-dozen functions to send it.
Compute APIs, like CUDA and OpenCL, decided it was more efficient to handle queues, allowing the application to write commands and send them wherever they need to go. Multiple threads can write commands, and multiple accelerators (GPUs in our case) can be targeted individually. Vulkan, like Mantle and DirectX 12, takes this metaphor and adds graphics-specific instructions to it. Moreover, GPUs can schedule memory, compute, and graphics instructions at the same time, as long as the graphics task has leftover compute and memory resources, and / or the compute task has leftover memory resources.
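The queue metaphor described above can be sketched in a toy model (this is illustrative Python, not real Vulkan code): several CPU threads record their own command buffers independently, and a single submission hands all of them to the device queue.

```python
# Toy model of the command-queue metaphor shared by Vulkan, Mantle, and
# DirectX 12 -- not actual API code. Each worker thread records its own
# command buffer; the main thread then submits them all to one queue.
import threading

def record_commands(thread_id, num_draws):
    """Stand-in for filling a command buffer on one CPU thread."""
    return [f"draw(thread={thread_id}, call={i})" for i in range(num_draws)]

def build_frame(num_threads=4, draws_per_thread=3):
    buffers = [None] * num_threads

    def worker(tid):
        # No shared global state is touched while recording -- this is
        # what lets the new APIs scale across CPU cores.
        buffers[tid] = record_commands(tid, draws_per_thread)

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # One submission delivers every recorded buffer to the "GPU" queue.
    return [cmd for buf in buffers for cmd in buf]

frame = build_frame()
print(len(frame))  # 4 threads x 3 draws each = 12 commands submitted
```

Contrast this with the old single-global-context model, where every draw call funnels through one thread and the driver serializes everything behind the scenes.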
This is not necessarily a “better” way to do graphics programming... it's different. That said, it has the potential to be much more efficient when dealing with lots of simple tasks that are sent from multiple CPU threads, especially to multiple GPUs (which currently require the driver to figure out how to convert draw calls into separate workloads -- leading to simplifications like mirrored memory and splitting workload by neighboring frames). Lots of tasks aligns well with video games, especially ones with lots of simple objects, like strategy games, shooters with lots of debris, or any game with large crowds of people. As it becomes ubiquitous, we'll see this bottleneck disappear and games will not need to be designed around these limitations. It might even be used for drawing with cross-platform 2D APIs, like Qt or even webpages, although those two examples (especially the Web) each have other, higher-priority bottlenecks. There are also other benefits to Vulkan.
The WebGL comparison is probably not as common knowledge as Khronos Group believes.
Still, Khronos Group was criticized when WebGL launched as "it was too tough for Web developers".
It didn't need to be easy. Frameworks arrived and simplified everything. It's now ubiquitous.
In fact, Adobe Animate CC (the successor to Flash Pro) is now a WebGL editor (experimentally).
Open platforms are required for this to become commonplace. Engines will probably target several APIs from their internal management APIs, but you can't target users who don't fit in any bucket. Vulkan brings this capability to basically any platform, as long as it has a compute-capable GPU and a driver developer who cares.
Thankfully, it arrived before any competitor established market share.
Subject: Graphics Cards | February 10, 2016 - 05:59 PM | Scott Michaud
Tagged: VR, vive vr, Oculus, evga, 980 Ti
You might wonder what makes a graphics card “designed for VR,” but this is actually quite interesting. Rather than plugging your headset into the back of your desktop, EVGA includes a 5.25” bay that provides 2x USB 3.0 ports and 1x HDMI 2.0 connection. The use case is that some users will want to easily connect and disconnect their VR devices, which, knowing a few indie VR developers, seems to be a part of their workflow. The same may be true of gamers, but I'm not sure.
While the bay allows for everything, including the HDMI plug via an on-card port, to be connected internally, you will need a spare USB 3.0 header on your motherboard to hook it up. It would have been interesting to see whether EVGA could have attached a USB 3.0 controller on the add-in board, but that might have been impossible (or impractical) given that the PCIe connector would need to be shared with the GPU (not to mention the complexity of also adding a USB 3.0 controller to the board). Also, I expect motherboards should have at least one. If not, you can find USB 3.0 add-in cards with internal headers.
The card comes in two sub-versions, one with the NVIDIA-style blower cooler, and the other with EVGA's ACX 2.0+ cooler. I tend to prefer exposed fan GPUs because they're easier to blow air into after a few years, but you might have other methods to control dust.
Both are currently available for $699.99 on Newegg.com, while Amazon only lists the ACX 2.0+ cooler version, and that one is out of stock. It is also listed at $699.99, though, so that should be the price to expect.
Early testing for higher end GPUs
UPDATE 2/5/16: Nixxes released a new version of Rise of the Tomb Raider today with some significant changes. I have added another page at the end of this story that looks at results with the new version of the game, a new AMD driver and I've also included some SLI and CrossFire results.
I will fully admit to being jaded by the industry on many occasions. I love my PC games and I love hardware but it takes a lot for me to get genuinely excited about anything. After hearing game reviewers talk up the newest installment of the Tomb Raider franchise, Rise of the Tomb Raider, since its release on the Xbox One last year, I've been waiting for its PC release to give it a shot with real hardware. As you'll see in the screenshots and video in this story, the game doesn't appear to disappoint.
Rise of the Tomb Raider takes the exploration and "tomb raiding" aspects that made the first games in the series successful and applies them to the visual quality and character design brought in with the reboot of the series a couple years back. The result is a PC game that looks stunning at any resolution, but even more so in 4K, that pushes your hardware to its limits. For single GPU performance, even the GTX 980 Ti and Fury X struggle to keep their heads above water.
In this short article we'll look at the performance of Rise of the Tomb Raider with a handful of GPUs, leaning towards the high end of the product stack, and offer up my view on whether each hardware vendor is living up to expectations.
Subject: Graphics Cards | February 4, 2016 - 05:51 PM | Jeremy Hellstrom
Tagged: gainward, GTX 960 Phantom 4GB, gtx 960, nvidia, 4GB
If you don't have a lot of cash on hand for games or hardware, a 4K adaptive sync monitor with two $600 GPUs and a collection of $80 AAA titles simply isn't on your radar. That doesn't mean you have to toss in your love of gaming for occasional free-to-play sessions; you just have to adapt. A prime example is the die-hard Skyrim fans who have modded the game to oblivion over the past few years, and many other games have communities that may not be new but are still thriving. Chances are that you are playing at 1080p, so a high-powered GPU is not needed; however, mods that upscale textures (and many others) do love huge tracts of RAM.
So for those outside of North America looking for a card they can afford after a bit of penny pinching, check out Legion Hardware's review of the 4GB version of the Gainward GTX 960 Phantom. It won't break any benchmarking records but it will let you play the games you love and even new games as their prices inevitably decrease over time.
"Today we are checking out Gainward’s premier GeForce GTX 960 graphics card, the Phantom 4GB. Equipped with twice the memory buffer of standard cards, it is designed for extreme 1080p gaming. Therefore it will be interesting to see how the Phantom 4GB compares to a 2GB GTX 960..."
Here are some more Graphics Card articles from around the web:
- GIGABYTE GTX 980 Ti G1 Gaming Review @ Hardware Canucks
- Inno3D GeForce GTX 980Ti X3 Ultra DHS @ eTeknix
- Desktop Graphics Card Comparison Guide @ TechARP
- Sapphire Nitro R9 Fury OC 4GB @ Kitguru
Subject: Graphics Cards | February 3, 2016 - 02:37 AM | Tim Verry
Tagged: virtual machines, virtual graphics, mxgpu, gpu virtualization, firepro, amd
AMD made an interesting enterprise announcement today with the introduction of new FirePro S-Series graphics cards that integrate hardware-based virtualization technology. The new FirePro S1750 and S1750 x2 are aimed at virtualized workstations, render farms, and cloud gaming platforms where each virtual machine has direct access to the graphics hardware.
The new graphics cards use a GCN-based Tonga GPU with 2,048 stream processors paired with 8GB of ECC GDDR5 memory on the single slot FirePro S1750. The dual slot FirePro S1750 x2, as the name suggests, is a dual GPU card that features a total of 4,096 shaders (2,048 per GPU) and 16 GB of ECC GDDR5 (8 GB per GPU). The S1750 has a TDP of 150W while the dual-GPU S1750 x2 variant is rated at 265W, and both are passively cooled.
Where the graphics cards get niche is the inclusion of what AMD calls MxGPU (Multi-User GPU) technology, which is derived from the SR-IOV (Single Root Input/Output Virtualization) PCI-Express standard. According to AMD, the new FirePro S-Series allows virtual machines direct access to the full range of GPU hardware (shaders, memory, etc.) and OpenCL 2.0 support on the software side. The S1750 supports up to 16 simultaneous users and the S1750 x2 tops out at 32 users. Each virtual machine is allocated an equal slice of the GPU, and as you add virtual machines the equal slices get smaller. AMD’s solution to that predicament is to add more GPUs to spread out the users and allocate each VM more hardware horsepower. It is worth noting that AMD has elected not to charge companies any per-user licensing fees for all these VMs the hardware supports, which should make these cards more competitive.
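AMD hasn't detailed the exact partitioning mechanism beyond its SR-IOV basis, but the equal-slice arithmetic is straightforward; a quick back-of-the-envelope sketch using the published totals:

```python
# Back-of-the-envelope view of MxGPU's equal per-VM partitioning.
# This just divides the card's published totals evenly; the real
# mechanism is SR-IOV hardware virtualization, not a software split.
def per_vm_slice(stream_processors, memory_gb, num_vms):
    """Return each VM's equal share of the GPU's resources."""
    return {
        "stream_processors": stream_processors // num_vms,
        "memory_mb": memory_gb * 1024 // num_vms,
    }

# FirePro S1750: 2,048 stream processors, 8 GB ECC GDDR5, up to 16 users.
print(per_vm_slice(2048, 8, 16))  # each of 16 VMs: 128 SPs, 512 MB
# Spreading the same users across more GPUs grows each VM's share:
print(per_vm_slice(2048, 8, 8))   # each of 8 VMs: 256 SPs, 1024 MB
```

This is exactly why AMD's answer to oversubscription is "add more GPUs": with a fixed equal split, the only way to give each VM more horsepower is to put fewer VMs on each physical GPU.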
The graphics cards use ECC memory to guard against memory errors in long-running calculations, and every VM is reportedly protected and isolated such that one VM cannot access another VM's data stored in graphics memory.
I am interested to see how these stack up against NVIDIA’s GRID and VGX specialized GPU virtualization cards. Software- versus hardware-based virtualization may not make much practical difference, but AMD’s approach may be ever so slightly more efficient thanks to the removal of a layer between the virtual machine and the hardware. We’ll have to wait and see, however.
Enterprise users will be able to pick up the new cards installed in systems from server manufacturers sometime in the first half of 2016. Pricing for the cards themselves appears to be $2,399 for the single GPU S1750 and $3,999 for the dual GPU S1750 x2.
Needless to say, this is all a bit more advanced (and expensive!) than the somewhat finicky 3D acceleration option desktop users can turn on in VMWare and VirtualBox! Are you experimenting with remote workstations and virtual machines for thin clients that can utilize GPU muscle? Does AMD’s MxGPU approach seem promising?
Subject: General Tech, Graphics Cards, Motherboards, Cases and Cooling | February 2, 2016 - 02:07 PM | Ryan Shrout
Tagged: Z170, PSU, power supply, motherboard, GTX 970, giveaway, ftw, evga, contest
For many of you reading this, the temperature outside has fallen to its deepest levels, making it hard to even bear the thought of going outdoors. What would help out a PC enthusiast and gamer in this situation? Some new hardware, delivered straight to your door, to install and assist in warming up your room, that's what!
PC Perspective has partnered up with EVGA to offer up three amazing prizes for our fans. They include a 750 G2 power supply (obviously with a 750 watt rating), a Z170 FTW motherboard and a GTX 970 SSC Gaming ACX 2.0+ graphics card. The total prize value is over $650 based on MSRPs!
All you have to do to enter is follow the easy steps in the form below.
We want to thank EVGA for its support of PC Perspective in this contest and over the years. Here's to a great 2016 for everyone!
Subject: Graphics Cards | January 25, 2016 - 03:19 PM | Jeremy Hellstrom
Tagged: XFX R9 380 Double Dissipation Black Edition OC 4GB, xfx, gtx 960
In one corner is the XFX R9 380 DD Black Edition OC 4GB, at factory settings with an overclock of 1170MHz core and 6.4GHz memory, and in the other corner is a GTX 960 with a 1178MHz Boost clock and 7GHz memory. These two contenders will compete in a six round 1080p match featuring Fallout 4, Project Cars, Witcher 3, GTAV, Dying Light and BF4 to see which is worthy of your hard-earned buckaroos. Your referee for today is [H]ard|OCP; tune in to see the final results.
"Today we evaluate a custom R9 380 from XFX, the XFX R9 380 DD BLACK EDITION OC 4GB. Sporting a hefty factory overclock and the Ghost Thermal 3.0 custom cooling with Double Dissipation, we compare it to an equally priced reference GeForce GTX 960. Find out which video card provides the better bargain."
Here are some more Graphics Card articles from around the web:
- ASUS STRIX R9 380X DirectCU II OC 1080p @ [H]ard|OCP
- Sapphire R9 390 Nitro 8 GB @ techPowerUp
- The OpenGL Speed & Perf-Per-Watt From The Radeon HD 2000/3000 Series Through The R9 Fury @ Phoronix
- 1080p NVIDIA Linux Comparison From GeForce 8 To GeForce 900 Series @ Phoronix