Vulkan is not extinct; in fact, it might be about to erupt

Subject: General Tech | February 15, 2017 - 01:29 PM |
Tagged: vulkan, Intel, Intel Skylake, kaby lake

The open source API, Vulkan, just received a big birthday present from Intel, which added official support on its Skylake and Kaby Lake CPUs under Windows 10. We have seen adoption of this API from a number of game engine designers: Unreal Engine and Unity have both embraced it, the latest DOOM release was updated to support Vulkan, and there is even a Nintendo 64 renderer which runs on it. Ars Technica points out that both AMD and NVIDIA have been supporting this API for a while and that we can expect to see Android implementations of this close-to-the-metal solution in the near future.

khronos-2016-vulkanlogo2.png

"After months in beta, Intel's latest driver for its integrated GPUs (version 15.45.14.4590) adds support for the low-overhead Vulkan API for recent GPUs running in Windows 10. The driver supports HD and Iris 500- and 600-series GPUs, the ones that ship with 6th- and 7th-generation Skylake and Kaby Lake processors."

Here is some more Tech News from around the web:

Tech Talk

Source: Ars Technica

WebKit Proposal for WebGPU

Subject: General Tech | February 8, 2017 - 10:46 PM |
Tagged: webkit, webgpu, metal, vulkan, webgl

Apple’s WebKit team has just announced their proposal for WebGPU, which competes with WebGL to provide graphics and GPU compute to websites. Being from Apple, it is based on the Metal API, so it has a lot of potential, especially as a Web graphics API.

Okay, so I have mixed feelings about this announcement.

apple-2017-webkit-logo.png

First, and most concerning, is that Apple has attempted to legally block open standards in the past. For instance, when The Khronos Group created WebCL based on OpenCL, which Apple owns the trademark and several patents to, Apple shut the door on extending their licensing agreement to the new standard. If the W3C considers Apple’s proposal, they should be really careful about what legal control they allow Apple to retain.

From a functionality standpoint, though, this is very interesting. With the aforementioned death of WebCL, as well as the sluggish progress of WebGL compute shaders, there's a lot of room to use one (or more) GPUs in a system for high-end compute tasks. Even if you are not interested in gaming in a web browser (although many people are, especially if you count the market that Adobe Flash dominated for the last ten years), you might want to GPU-accelerate photo and video tasks. Having an API that allows for this would be very helpful going forward, although, as stated, others are working on it, like The Khronos Group with WebGL compute shaders. On the other-other hand, an API that allows explicit multi-GPU would be even more interesting.

Further, it sounds like they're even intending to ingest bytecode eventually, like what DirectX 12 and Vulkan are doing with DXIL and SPIR-V, respectively, but the current proposal accepts shader code as a string and compiles it in the driver. This is interesting from a security standpoint, because it obscures exactly what code ends up executing on the GPU, but that's up to the graphics and browser vendors to figure out... for now.

So when will we see it? No idea! There’s an experimental WebKit patch, which requires the Metal API, and an API proposal... a couple blog posts... a tweet or two... and that’s about it.

So what do you think? Does the API sound interesting? Does Apple’s involvement scare you? Or does getting scared about Apple’s involvement annoy you? Comment away! : D

Source: WebKit

NVIDIA Releases Vulkan Developer 376.80 Beta Drivers

Subject: Graphics Cards | February 3, 2017 - 05:58 PM |
Tagged: nvidia, graphics drivers, vulkan

On February 1st, NVIDIA released a new developer beta driver, which fixes a couple of issues with their Vulkan API implementation. Despite what some sites have been reporting, you should not download it to play games that use the Vulkan API, like DOOM. In short, it is designed for developers, not end-users. The goal is to provide correct results when software interacts with the driver, not the best gaming performance or anything like that.

khronos-2016-vulkanlogo2.png

In a little more detail, it looks like 376.80 implements the Vulkan 1.0.39.1 SDK. This update addresses two issues with accessing devices and extensions, under certain conditions, when using the 1.0.39.0 SDK. 1.0.39.0 was released on January 23rd, and thus it will not even be a part of current video games. Even worse, this driver, like most graphics drivers for software developers, is based on the older GeForce 376 branch, so it won't even have NVIDIA's most recent fixes and optimizations. NVIDIA does this so they can add or change the features that Vulkan developers require without needing to roll in patches every time they make a "Game Ready" optimization or something. There is no reason to use this driver unless you are developing Vulkan applications and want to try out the new extensions. It will eventually make it to end users... when it's time.

If you wish to develop software using Vulkan's bleeding-edge features, check out NVIDIA's developer portal to pick up the latest drivers. Basically everyone else should use 378.49 or its 378.57 hotfix.
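If you are curious what a particular driver actually exposes before installing a full SDK, a quick sketch like this (again my own, not NVIDIA sample code) dumps the instance-level extensions the installed loader and driver report:

/* List Vulkan instance extensions reported by the installed loader/driver. */
#include <stdio.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

int main(void) {
    uint32_t count = 0;
    vkEnumerateInstanceExtensionProperties(NULL, &count, NULL);

    VkExtensionProperties *extensions = malloc(count * sizeof(*extensions));
    if (!extensions)
        return 1;
    vkEnumerateInstanceExtensionProperties(NULL, &count, extensions);

    for (uint32_t i = 0; i < count; ++i)
        printf("%s (revision %u)\n",
               extensions[i].extensionName, extensions[i].specVersion);

    free(extensions);
    return 0;
}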

Source: NVIDIA

DirectX Intermediate Language Announced... via GitHub

Subject: Graphics Cards | January 27, 2017 - 09:19 PM |
Tagged: microsoft, DirectX, llvm, dxil, spir-v, vulkan

Over the holidays, Microsoft published the DirectX Shader Compiler on GitHub. The interesting part is that it compiles HLSL into DirectX Intermediate Language (DXIL) bytecode, which can be ingested by GPU drivers and executed on graphics devices. That matters because DXIL is based on LLVM, which might sound familiar if you have been following along with The Khronos Group and their announcements regarding Vulkan, OpenCL, and SPIR-V.

As it turns out, The Khronos Group was on to something, and Microsoft is working on a DirectX analogue of SPIR-V.

microsoft-2015-directx12-logo.jpg

The main advantage of LLVM-based bytecode is that you can eventually support multiple languages (and the libraries of code developed in them). When SPIR-V was announced with Vulkan, the first thing that came to my mind was compiling to it from HLSL, which would be useful for existing engines, as they are typically written in HLSL and transpiled to the target platform when used outside of DirectX (like GLSL for OpenGL). So, in Microsoft's case, it would make sense that they start there (since they own the thing), but I doubt that is the end goal. The most seductive outcome for game engine developers would be single-source C++, but there are a lot of steps between here and there.

Another advantage, albeit to a lesser extent, is that you might be able to benefit from performance optimizations, both on the LLVM / language side as well as on the driver’s side.

According to their readme, the minimum support will be HLSL Shader Model 6. This is the most recent shading model, and it introduces some interesting instructions, typically for GPGPU applications, that allow multiple GPU threads to interact, like balloting. Ironically, while DirectCompute and C++AMP don’t seem to be too popular, this would nudge DirectX 12 into a somewhat competent GPU compute API.

DXIL support is limited to Windows 10 Build 15007 and later, so you will need to either switch one (or more) workstation(s) to Insider, or wait until it launches with the Creators Update (unless something surprising holds it back).

Unity 5.6 Beta Supports Vulkan API

Subject: General Tech | December 22, 2016 - 06:54 PM |
Tagged: pc gaming, Unity, vulkan

One of the most popular video game engines, Unity, has released a beta for Unity 5.6, which will be the last version of Unity 5.x. This release pushes Vulkan into full support on both desktop and mobile, which actually beats Unreal Engine 4 on the desktop side of things. Specifically, Vulkan is available for the Android, Windows, Linux, and Tizen operating systems. Apple users should be happy that this version also updates Metal for iOS and macOS, but Apple is still preventing vendors from shipping Vulkan drivers, so you really shouldn't feel too happy.

unity-logo-rgb.png

At Unity’s Unity 2016 keynote, the company claimed about 30-60% better performance on the new API “out-of-the-box”. I do find this statement slightly odd, though, because Unity doesn’t really provide much access to “the box” without expensive source code up-sells. The most user involvement of the engine internals, for what I would assume is the majority of projects, is buying and activating a plug-in, and Vulkan would be kind-of crappy to hide behind a pay wall.

I mentioned that this will be the last Unity 5.x version. While the difference between a major and a minor version number tends to be just marketing these days, Unity is changing their major version to align with the year that it belongs to. Expect future versions, starting with a beta version in April, to be numbered 2017.x.

Unity 5.6 comes out of beta in March.

Source: Unity
Vulkan 1.0, OpenGL 4.5, and OpenGL ES 3.2 on a console

Subject: Systems, Mobile

A few days ago, sharp eyes across the internet noticed that Nintendo's Switch console has been added to the lists of compliant hardware at The Khronos Group. Vulkan 1.0 was the eye-catcher, although the other tabs also claim conformance with OpenGL 4.5 and OpenGL ES 3.2. The device is not listed as compatible with OpenCL, although that does not really surprise me for a single-GPU gaming system. The other three APIs have compute shaders designed around the needs of game developers. So the Nintendo Switch conforms to the latest standards of the three most important graphics APIs that a gaming device should use -- awesome.

But what about performance?

In other news, Eurogamer / Digital Foundry and VentureBeat uncovered information about the hardware. It will apparently use a Tegra X1, which is based on second-generation Maxwell and underclocked relative to what we see in the Shield TV. When docked, the GPU will be able to reach 768 MHz on its 256 CUDA cores. When undocked, this will drop to 307.2 MHz (although the system can utilize this mode while docked, too). This puts the performance at ~315 GFLOPs when mobile, pushing up to ~785 GFLOPs when docked.

You might compare this to the Xbox One, which runs at ~1310 GFLOPs, and the PlayStation 4, which runs at ~1840 GFLOPs. This puts the Nintendo Switch somewhat behind both, although the gap is even wider than it looks. The FLOP figures for Sony and Microsoft are calculated as 2 x shader count x frequency, but the figure for Nintendo's Switch is 4 x shader count x frequency. FMA accounts for one factor of two, but the extra factor of two in Nintendo's case... ...

Yup, the Switch’s performance rating is calculated as FP16, not FP32.
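To make the comparison concrete, here is the back-of-the-envelope math as a tiny C program, using the leaked clocks and the two formulas above (these are the article's assumed numbers, not official Nintendo specifications):

/* GFLOPS = ops per core per clock x core count x clock (MHz) / 1000 */
#include <stdio.h>

int main(void) {
    const double cores = 256.0;        /* Tegra X1 CUDA cores */
    const double docked_mhz = 768.0;
    const double mobile_mhz = 307.2;

    /* FP32: one FMA counts as 2 floating-point ops per core per clock. */
    printf("FP32 docked: ~%.0f GFLOPS\n", 2.0 * cores * docked_mhz / 1000.0);

    /* FP16: the Tegra X1 packs two half-precision FMAs per core, so 4 ops. */
    printf("FP16 docked: ~%.0f GFLOPS\n", 4.0 * cores * docked_mhz / 1000.0);
    printf("FP16 mobile: ~%.0f GFLOPS\n", 4.0 * cores * mobile_mhz / 1000.0);
    return 0;
}

That works out to roughly 786 and 315 GFLOPS at FP16, matching the figures above within rounding, and only about 393 GFLOPS docked if you count FP32 the way Sony and Microsoft do.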

nintendo-2016-switch-gpu.png

Snippet from an alleged leak of what Nintendo is telling developers.
If true, it's very interesting that FP16 values are being discussed as canonical.

Reducing shader precision down to 16-bit is common for mobile devices. It takes fewer transistors to store and translate half-precision values, and accumulated error will be muted by the fact that you're viewing it on a mobile screen. The Switch isn't always a mobile device, though, so it will be interesting to see how this reduction of lighting and shading precision will affect games on your home TV, especially in titles that don't follow Nintendo's art styles. That said, shaders could use 32-bit values, but then you are cutting your performance for those instructions in half, when you are already somewhat behind your competitors.

As for the loss of performance when undocked, it shouldn't be too much of an issue if Nintendo pressures developers to hit 1080p when docked. If that's the case, the lower-resolution 720p mobile screen (about 2.25x fewer pixels than 1080p) will roughly scale with the 2.5x drop in clock.

Lastly, there are a bunch of questions surrounding Nintendo's choice of operating system: basically, all the questions. It's being developed by Nintendo, but we have no idea what they forked it from. NVIDIA supports the Tegra SoC on both Android and Linux, it would be legal for Nintendo to fork either one, and Nintendo could have just asked for drivers even if NVIDIA didn't already support the platform in question. Basically, anything is possible from the outside, and I haven't seen any solid leaks from the inside.

The Nintendo Switch launches in March.

LibRetro Vulkanizes PlayStation

Subject: General Tech | December 4, 2016 - 02:42 PM |
Tagged: pc gaming, vulkan, libretro

About half a year ago, LibRetro added Vulkan support to their Nintendo 64 renderer. This allowed them to do things like emulate the console's hardware rasterization in software, and do so as an asynchronous shader, circumventing limitations their OpenGL path hit when trying to emulate the console's offbeat GPU.

universal-2016-crashbandicoot.png

Image Credit: Universal Interactive via Wikipedia

They have now turned their sights (“for the lulz”) to the original PlayStation, creating a Vulkan back-end for emulators like Beetle PSX.

The fairly long blog post discusses how the PlayStation is designed in detail, making it an interesting read for anyone curious. One point that I found particularly interesting is how the video memory is configured as a single, 1MB, 2D array (1024x512x16-bit). At the time, texture resolutions were quite small, and frame buffers were between 256x224 and 640x480, so that's a lot of room to make a collage out of your frame and all the textures in the scene, but it's still odd to think about a console imposing such restrictions now that we're spoiled by modern GPUs.
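As a rough illustration of how tight that budget was (my own arithmetic, just multiplying out the dimensions described above), here is how much of that 1024x512 array a single 16-bit frame buffer occupies:

/* How much of the PlayStation's 1 MB VRAM a single 16-bit frame buffer uses. */
#include <stdio.h>

int main(void) {
    const int vram_bytes = 1024 * 512 * 2;     /* 1024x512 array of 16-bit texels */
    const int fb_low  = 256 * 224 * 2;         /* low-resolution frame buffer     */
    const int fb_high = 640 * 480 * 2;         /* high-resolution frame buffer    */

    printf("VRAM total:     %d bytes\n", vram_bytes);
    printf("256x224 buffer: %d bytes (%.1f%% of VRAM)\n",
           fb_low, 100.0 * fb_low / vram_bytes);
    printf("640x480 buffer: %d bytes (%.1f%% of VRAM)\n",
           fb_high, 100.0 * fb_high / vram_bytes);
    return 0;
}

A 256x224 buffer leaves almost 90% of the array free for textures, while a 640x480 buffer already eats more than half of it before double-buffering is even considered.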

In terms of performance, the developer claims that modern GPUs can handle 8K resolutions with relative ease and four-digit frame rates at lower resolutions.

Epic Games Releases Unreal Engine 4.14

Subject: General Tech | November 15, 2016 - 06:04 PM |
Tagged: vulkan, ue4, pc gaming, epic games

Every couple of months, Epic Games drops a new version of Unreal Engine 4 with improvements all over. As such, you should check the full release notes to see all of the changes, including the fifty-one that Epic thinks are worth highlighting. Here are some that I think our readers would enjoy, though.

epic-2016-ue414-translucentforward.jpg

First, Vulkan support for mobile devices has apparently moved out of experimental. While this will not be enabled for desktop applications, it's interesting to note that DirectX 12 is still experimental. Basically, if you squint and put blinders on, you could sort of see some element of Vulkan beating DirectX 12 to market.

Second, Unreal Engine 4 has significantly upgraded its forward renderer. In a lot of cases, a deferred renderer is preferable because it's fast and consistent; the lighting pass runs only once per output pixel, so it skips lighting triangles that end up covered by other triangles. The way it is structured, though, makes multisample anti-aliasing impossible, which is slightly annoying on desktop but brutal in VR. As an added benefit, they're also using forward shading to help the deferred renderer with translucent materials.

Unreal Engine typically uses a lot of NVIDIA SDKs. This version updates PhysX to 3.4, which allows "continuous collision detection" on rigid bodies. With this feature enabled, a fast-moving object shouldn't pass through other objects without colliding simply because the collision happened between two simulation steps and was missed. They are also adding the Ansel SDK, which allows players to take high-detail screenshots, as a plug-in.
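To see why that matters, here is a toy, one-dimensional sketch (my own illustration, not PhysX code) of how a discrete end-of-step check can tunnel straight through a thin wall, while a swept check over the path travelled catches the hit:

/* Toy continuous-vs-discrete collision check: a point moving along the x-axis
 * toward a thin wall occupying [5.0, 5.1]. */
#include <stdio.h>
#include <stdbool.h>

/* Discrete: only tests the position at the end of the step. */
static bool discrete_hit(double x_end, double wall_min, double wall_max) {
    return x_end >= wall_min && x_end <= wall_max;
}

/* Swept ("continuous"): tests whether the segment travelled overlaps the wall
 * (assumes movement in the +x direction, so x_start <= x_end). */
static bool swept_hit(double x_start, double x_end,
                      double wall_min, double wall_max) {
    return x_start <= wall_max && x_end >= wall_min;
}

int main(void) {
    double x = 0.0, velocity = 12.0, dt = 1.0;
    double next = x + velocity * dt;          /* jumps from 0.0 to 12.0 */

    printf("discrete: %s\n", discrete_hit(next, 5.0, 5.1) ? "hit" : "missed");
    printf("swept:    %s\n", swept_hit(x, next, 5.0, 5.1) ? "hit" : "missed");
    return 0;
}

PhysX obviously works with full 3D shapes rather than points, but the principle is the same, and the extra cost is presumably why the feature is opt-in per rigid body.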

Skipping down the release notes a bunch, Unreal Engine 4.14 also adds support for Visual Studio 15, which is the version after Visual Studio 2015 (Visual Studio 14.0). Both IDEs are, in fact, supported. It's up to the developer to choose which one to use, although Visual Studio 15 makes a lot of improvements regarding install and uninstall.

Finally, at least for my brief overview, Unreal Engine 4.14 has begun to refactor its networking system. It sounds like the current optimizations are CPU-focused, but allowing more network-capable objects is always a plus. Epic Games claims they are benchmarking about 40% higher performance in this area.

Source: Epic Games

Phoronix Tests NVIDIA GPUs OpenGL vs Vulkan on Linux

Subject: Graphics Cards | November 5, 2016 - 08:19 PM |
Tagged: linux, DOTA 2, valve, nvidia, vulkan, opengl

Phoronix published interesting benchmark results for OpenGL vs Vulkan on Linux, across a wide spread of thirteen NVIDIA GPUs. Before we begin, the CPU they chose was an 80W Intel Xeon E3-1280 v5, which fits somewhere between the Skylake-based Core i7-6700K and Core i7-6700 (no suffix). You may think that a Xeon v5 would be based on Broadwell, but, for some reason, Intel chose to base the E3-1200 v5 series on Skylake. Regardless, the choice of CPU will come into play.

They will apparently follow up this article with AMD results.

khronos-vulkan-logo.png

A trend arose throughout the whole article. At 1080p, everything, from the GTX 760 to the GTX 1080, was rendering at ~101 FPS on OpenGL and ~115 FPS on Vulkan. The obvious explanation is that the game is 100% CPU-bound on both APIs, but Vulkan is able to relax the main CPU thread enough to squeeze out about 14% more frames.

The thing is, the Xeon E3-1280 v5 is about as high-end a mainstream CPU as you can get. It runs the most modern architecture and it can achieve clocks up to 4 GHz on all cores. DOTA 2 can get harsh on the CPU when a lot of units are on screen, but this ceiling still seems surprisingly low. Then again, I don't have any experience running DOTA 2 benchmarks, so maybe it's a known thing, or maybe even a Linux-version thing?

Moving on, the results get more interesting when running the game at 4K. In GPU-bound scenarios, NVIDIA's driver shows a fairly large performance advantage on OpenGL. Basically every GPU up to the GTX 1060 runs at a higher frame rate in OpenGL, with the advantage only flipping to Vulkan on the GTX 1070 and GTX 1080, where OpenGL hits that 101 FPS ceiling and Vulkan goes a little above it.

Again, it will be interesting to see how AMD fares against this line of products, both in Vulkan and OpenGL. Those results will apparently come "soon".

Source: Phoronix

Feral Interactive Plans Vulkan Ports in 1st Half of 2017

Subject: General Tech | October 31, 2016 - 07:12 PM |
Tagged: feral interactive, pc gaming, vulkan, linux

Beginning in the first half of next year, Feral Interactive plans to release software running on the Vulkan API. Feral is one of the three well-known Linux port developers that convert Windows games under deals with the original creators; the other two are Aspyr Media and the independent contractor Ryan C. Gordon.

feral-2016-logo.png

They didn't say which game would be first. Deus Ex: Mankind Divided will initially be released on OpenGL, but people are speculating that, since its rendering back-end is set up to efficiently queue DirectX 12 tasks, which is the same basic structure that Vulkan uses, Feral might release a patch for it later. Alternatively, they could have another title in the works, although I cannot think of anything short of DOOM that would fit the bill, and there has been nothing from Bethesda, id, or Feral to suggest that it is leaving Windows. Maybe Tomb Raider?

Whatever it is, we're beginning to see more than just engine developers port software to the new graphics APIs, and on multiple platforms, too.