
Caught Up to DirectX 12 in a Single Day

The wait for Vulkan is over.

I'm not just talking about the specification. Members of the Khronos Group have also released compatible drivers, SDKs and tools to support them, conformance tests, and a proof-of-concept patch for Croteam's The Talos Principle. To reiterate: this is not a soft launch. The API and its entire ecosystem are out and ready for the public on Windows (7 and later at launch, though a surprise Vista or XP announcement is technically possible) and several distributions of Linux. Google will provide an Android SDK in the near future.

khronos-2016-vulkan-why.png

I'm going to editorialize for the next two paragraphs. There was a concern that Vulkan would arrive too late. The thing is, as of today, Vulkan is just as mature as DirectX 12. Of course, that could change at a moment's notice; we still don't know how the two APIs are being adopted behind the scenes. A few DirectX 12 titles are planned to launch in the next few months, but no full, non-experimental, non-early-access game exists yet. Each time I say this, someone links the Wikipedia list of DirectX 12 games. Look at each entry, though, and you'll see that every one of them is either early access, awaiting an unreleased DirectX 12 patch, or using a third-party engine (like Unreal Engine 4) that only lists DirectX 12 as an experimental preview. Besides, if that last category counts, then you'll need to accept The Talos Principle's proof-of-concept patch, too.

But again, that could change. While today's launch speaks well of the Khronos Group and the API itself, Vulkan still needs to be adopted by third-party engines, middleware, and software. Those partners could, like the Khronos Group before today, be privately supporting Vulkan with the intent to release a flood of announcements all at once; we won't know until they do... or don't. Once the popular engines and frameworks support it, dependent software really just needs to enable it. That has not happened for DirectX 12 yet, and there doesn't seem to be anything keeping it from happening for Vulkan at any moment. With the Game Developers Conference just a month away, we should find out soon.

khronos-2016-vulkan-drivers.png

But back to the announcement.

Vulkan-compatible drivers are launching today across multiple vendors and platforms, but I do not have a complete list. On Windows, I was told to expect drivers from NVIDIA for Windows 7, 8.x, and 10 on Kepler and Maxwell GPUs. The standard is compatible with Fermi GPUs, but NVIDIA does not plan to support the API for those users due to that architecture's low market share. That said, they are paying attention to user feedback and are not ruling it out, which probably means they are keeping an open mind in case some popular piece of software ends up depending on Vulkan. As of this writing, I have not heard from AMD or Intel about Vulkan drivers one way or the other; their drivers could even arrive on day one.

On Linux, NVIDIA, Intel, and Imagination Technologies have submitted conformant drivers.

Drivers alone do not make a hard launch, though. SDKs and tools have also arrived, including the LunarG SDK for Windows and Linux. LunarG is a company co-founded by Jens Owen, whose previous graphics software company was purchased by VMware. LunarG is backed by Valve, which has also backed Vulkan in several other ways. The LunarG SDK helps developers validate their code, inspect what the API is doing, and otherwise debug. Even better, it is open source, which means the community can rapidly enhance it, even though it is already in a releasable state. RenderDoc, the open-source graphics debugger from Crytek, will also add Vulkan support. (Update, Feb 16 @ 12:39pm EST: Baldur Karlsson has emailed me to let me know that RenderDoc was a personal project at Crytek, not a Crytek project in general, and that their GitHub page is much more up-to-date than the linked site.)
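
For a sense of how the SDK's validation plugs into a program, here is a minimal sketch that enables the launch-era LunarG "standard validation" meta-layer at instance creation. This is illustrative only: error handling is trimmed, and hooking up the VK_EXT_debug_report callback that actually prints the validation messages is omitted.

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>

int main() {
    // Ask the loader for the LunarG standard validation meta-layer, which
    // chains the individual validation layers shipped in the SDK.
    const char* layers[] = { "VK_LAYER_LUNARG_standard_validation" };

    VkApplicationInfo app = {};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.pApplicationName = "validation-sketch";
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    info.pApplicationInfo = &app;
    info.enabledLayerCount = 1;
    info.ppEnabledLayerNames = layers;

    VkInstance instance;
    if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) {
        std::printf("Instance creation failed (is the SDK layer installed?)\n");
        return 1;
    }

    // From here on, every Vulkan call is checked by the validation layers,
    // which report incorrect usage that the driver itself will not catch.
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```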

vulkan_gltransition_maintenance1.png

The major downside is that Vulkan (like Mantle and DX12) isn't simple.
These APIs are verbose and very different from previous ones, which requires more effort.

Image Credit: NVIDIA

There really isn't much to say about the Vulkan launch beyond this. What graphics APIs really try to accomplish is standardizing signals that enter and leave video cards, such that the GPUs know what to do with them. For the last two decades, we've settled on an arbitrary, single, global object that you attach buffers of data to, in specific formats, and call one of a half-dozen functions to send it.
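
As a rough illustration of that single-global-object model, here is what uploading and drawing a buffer looks like in traditional OpenGL. This is a sketch only: it assumes a context is already current on the calling thread and that a function loader such as GLEW has been initialized.

```cpp
#include <GL/glew.h>

// Everything below binds into one implicit, global context -- the single
// global object described above.
void draw_triangle(GLuint program, const float* vertices, GLsizeiptr bytes) {
    GLuint vao = 0, vbo = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);                // global "current vertex array"

    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);    // global "current buffer"
    glBufferData(GL_ARRAY_BUFFER, bytes, vertices, GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    glUseProgram(program);                 // global "current program"
    glDrawArrays(GL_TRIANGLES, 0, 3);      // one of the half-dozen "send it" calls

    glDeleteBuffers(1, &vbo);
    glDeleteVertexArrays(1, &vao);
}
```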

Compute APIs, like CUDA and OpenCL, decided it was more efficient to handle queues, allowing the application to write commands and send them wherever they need to go. Multiple threads can write commands, and multiple accelerators (GPUs in our case) can be targeted individually. Vulkan, like Mantle and DirectX 12, takes this metaphor and adds graphics-specific instructions to it. Moreover, GPUs can schedule memory, compute, and graphics instructions at the same time, as long as the graphics task has leftover compute and memory resources, and / or the compute task has leftover memory resources.
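
For a rough idea of how that queue metaphor reads in Vulkan code, here is a heavily abridged sketch. It assumes the device, queue, command pool, and pipeline have already been created, render pass setup is omitted, and real code would synchronize with fences and semaphores rather than waiting on the queue.

```cpp
#include <vulkan/vulkan.h>

// Record a few commands into a command buffer and hand them to a queue.
// Any thread can record into its own command buffer; only submission to a
// given queue needs to be externally synchronized.
void record_and_submit(VkDevice device, VkCommandPool pool,
                       VkQueue queue, VkPipeline pipeline) {
    VkCommandBufferAllocateInfo alloc = {};
    alloc.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
    alloc.commandPool = pool;
    alloc.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    alloc.commandBufferCount = 1;

    VkCommandBuffer cmd;
    vkAllocateCommandBuffers(device, &alloc, &cmd);

    VkCommandBufferBeginInfo begin = {};
    begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    vkBeginCommandBuffer(cmd, &begin);

    // (Render pass begin/end omitted.) The application, not the driver,
    // decides what goes into each buffer and which queue receives it.
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdDraw(cmd, 3, 1, 0, 0);   // three vertices, one instance

    vkEndCommandBuffer(cmd);

    VkSubmitInfo submit = {};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
    vkQueueWaitIdle(queue);       // crude sync for the sketch; real code uses fences
}
```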

This is not necessarily a “better” way to do graphics programming... it's different. That said, it has the potential to be much more efficient when dealing with lots of simple tasks sent from multiple CPU threads, especially to multiple GPUs (which currently require the driver to figure out how to split draw calls into separate workloads -- leading to simplifications like mirrored memory and dividing the workload across neighboring frames). Lots of small tasks align well with video games, especially ones with many simple objects: strategy games, shooters with lots of debris, or any game with large crowds of people. As the API becomes ubiquitous, we'll see this bottleneck disappear, and games will no longer need to be designed around it. It might even be used for drawing with cross-platform 2D APIs, like Qt, or even webpages, although those two examples (especially the Web) each have other, higher-priority bottlenecks. There are also other benefits to Vulkan.

khronos-2016-vulkan-middleware.png

The WebGL comparison is probably not as widely known as the Khronos Group believes. Still, Khronos was criticized when WebGL launched because it was supposedly too tough for Web developers. It didn't need to be easy, though: frameworks arrived, simplified everything, and WebGL is now ubiquitous. In fact, Adobe Animate CC (the successor to Flash Professional) is now, experimentally, a WebGL editor.

Open platforms are required for this to become commonplace. Engines will probably target several APIs from their internal abstraction layers, but you can't reach users who don't fit into any of those buckets. Vulkan brings this capability to basically any platform, as long as it has a compute-capable GPU and a driver developer who cares.
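
As a purely hypothetical sketch of what "targeting several APIs from an internal abstraction layer" tends to look like inside an engine (none of these class or function names come from a real engine):

```cpp
#include <memory>
#include <string>
#include <stdexcept>

// Illustrative engine-side abstraction: game code talks to this interface,
// and a backend is chosen per platform and driver at startup.
class RenderBackend {
public:
    virtual ~RenderBackend() = default;
    virtual void submit_frame() = 0;
};

class VulkanBackend : public RenderBackend {
public:
    void submit_frame() override { /* build command buffers, vkQueueSubmit, ... */ }
};

class D3D12Backend : public RenderBackend {
public:
    void submit_frame() override { /* populate command lists, ExecuteCommandLists, ... */ }
};

std::unique_ptr<RenderBackend> create_backend(const std::string& api) {
    if (api == "vulkan") return std::make_unique<VulkanBackend>();
    if (api == "d3d12")  return std::make_unique<D3D12Backend>();
    throw std::runtime_error("no backend available for this platform");
}
```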

Thankfully, it arrived before any competitor established market share.

Phoronix Tests Almost a Decade of GPUs

Subject: Graphics Cards | January 20, 2016 - 08:26 PM |
Tagged: nvidia, linux, tesla, fermi, kepler, maxwell

It's nice to see long-term roundups every once in a while. They do not really provide useful information for someone looking to make a purchase, but they show how our industry is changing (or not). In this case, Phoronix tested twenty-seven NVIDIA GeForce cards across four architectures: Tesla, Fermi, Kepler, and Maxwell. In other words, from the GeForce 8 series all the way up to the GTX 980 Ti.

phoronix-2016-many-nvidia-roundup.jpg

Image Credit: Phoronix

Nine years of advancements in ASIC design, with a doubling period of 18 months, should yield a 64-fold improvement. Transistor count falls short of that, showing roughly a 12-fold increase between the Titan X and the largest first-wave Tesla chip, although that shortfall is largely outside the control of a fabless semiconductor designer like NVIDIA. The main reason I include this figure is to show the actual Moore's Law trend over this time span, but it also highlights the slowdown in process technology.
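
Spelled out, the back-of-the-envelope expectation (assuming a clean 18-month doubling period) is simply:

$$ 2^{\,9\ \text{years} \,/\, 1.5\ \text{years per doubling}} = 2^{6} = 64 $$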

Performance per watt does depend on NVIDIA, though, and the ratio between the GTX 980 Ti and the 8500 GT is about 72:1. While this is slightly better than the target 64:1 ratio, these parts sit at very different points in their respective product stacks. Swap the 8500 GT for the following year's 9800 GTX, which makes it a comparison between top-of-the-line GPUs of their respective eras, and the GTX 980 Ti shows only a 6.2x improvement in performance per watt. On the other hand, the 9800 GTX was an outstanding part for its era.

I should note that all of these tests take place on Linux. The results might not perfectly reflect the landscape on Windows, but, again, they are interesting in their own right.

Source: Phoronix

Intel Pushes Device IDs of Kaby Lake GPUs

Subject: Graphics Cards, Processors | January 8, 2016 - 07:38 AM |
Tagged: Intel, kaby lake, linux, mesa

Quick post about something that came to light over at Phoronix. Someone noticed that Intel published a handful of PCI device IDs for graphics processors to Mesa and libdrm. It will take a few months for graphics drivers to catch up, although this suggests that Kaby Lake will be releasing relatively soon.

intel-2015-linux-driver-mesa.png

It also gives us hints about what Kaby Lake will be. The published batch spans six performance tiers: GT1 has five IDs, GT1.5 has three, GT2 has six, GT2F has one, GT3 has three, and GT4 has four. Adding them up, Intel appears to be planning 22 distinct GPU devices. The Phoronix post lists the actual device IDs, but those are probably not interesting to our readers. Whether some of those devices overlap in performance or numbering is unclear, but it would make sense given how few graphics SKUs Intel usually offers (though I have zero experience in GPU driver development, so take that speculation for what it's worth).
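
For context, published IDs like these typically end up as entries in a device table that maps a PCI ID to a hardware generation or tier. The sketch below is purely illustrative, with made-up IDs and names rather than the actual Mesa/libdrm symbols or the real Kaby Lake values.

```cpp
#include <cstdint>

// Hypothetical shape of a PCI-ID-to-GPU-tier table, loosely modeled on how
// user-space drivers match Intel graphics devices. IDs here are placeholders.
struct GpuDeviceInfo {
    std::uint16_t pci_id;   // PCI device ID published by the vendor
    const char*   tier;     // e.g. "GT1", "GT2", "GT3"
};

static const GpuDeviceInfo kExampleTable[] = {
    { 0x0001, "GT1" },
    { 0x0002, "GT2" },
    { 0x0003, "GT3" },
};

const char* lookup_tier(std::uint16_t pci_id) {
    for (const auto& entry : kExampleTable)
        if (entry.pci_id == pci_id) return entry.tier;
    return nullptr;  // unknown device: the driver will not bind to it
}
```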

Source: Phoronix

Tessellation Support Expands for Intel's Open Linux Driver

Subject: Graphics Cards | December 29, 2015 - 12:05 PM |
Tagged: opengl, mesa, linux, Intel

The open-source driver for Intel is known to be a little behind on Linux. Because Intel does not provide as much support as they should, the driver still does not support OpenGL 4.0, although that is changing. One large chunk of that API is support for tessellation, which comes from DirectX 11, and recent patches are adding it for supported hardware. Proprietary drivers exist, at least for some platforms, but they have their own issues.

intel-2015-linux-driver-mesa.png

According to the Phoronix article, once the driver reaches OpenGL 4.0, it should not take much longer to open the path to 4.2. Tessellation is a huge hurdle, partially because it adds two whole shading stages to the rendering pipeline. Support landed for Broadwell GPUs recently, and a patch committed yesterday expands it to Ivy Bridge and Haswell. On Windows, Intel is far ahead -- pushing OpenGL 4.4 for Skylake-based graphics, although that platform only has proprietary drivers. AMD and NVIDIA are up to OpenGL 4.5, which is the latest version.
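
To make those two extra stages concrete, here is a rough sketch of the API side of OpenGL 4.0 tessellation. It is illustrative only: shader compile checks and the GLSL source are omitted, and a function loader such as GLEW is assumed.

```cpp
#include <GL/glew.h>

// Attach the two tessellation stages (control and evaluation) that sit
// between the vertex stage and the rest of the OpenGL 4.0 pipeline.
GLuint build_tessellation_program(GLuint vs, GLuint fs,
                                  const char* tcs_src, const char* tes_src) {
    GLuint tcs = glCreateShader(GL_TESS_CONTROL_SHADER);     // new stage #1
    glShaderSource(tcs, 1, &tcs_src, nullptr);
    glCompileShader(tcs);

    GLuint tes = glCreateShader(GL_TESS_EVALUATION_SHADER);  // new stage #2
    glShaderSource(tes, 1, &tes_src, nullptr);
    glCompileShader(tes);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, tcs);
    glAttachShader(program, tes);
    glAttachShader(program, fs);
    glLinkProgram(program);

    // Tessellated geometry is submitted as patches rather than triangles.
    glPatchParameteri(GL_PATCH_VERTICES, 3);
    // ... later: glDrawArrays(GL_PATCHES, 0, vertex_count);
    return program;
}
```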

While all of this is happening, Valve is working on an open-source Vulkan driver for Intel on Linux. That API sits alongside OpenGL and is built for high-performance graphics and compute. (Note that OpenCL is more sophisticated than Vulkan "1.0" will be on the compute side of things.) As nice as it would be to get high-end OpenGL support, especially for developers who want a simpler way to communicate with GPUs, Vulkan will probably be the API that matters most for high-end video games. But again, that only applies to games developed for it.

Source: Phoronix

Running Intel HD 530 graphics under Linux

Subject: Processors | November 12, 2015 - 06:22 PM |
Tagged: linux, Skylake, Intel, i5-6600K, hd 530, Ubuntu 15.10

A great way to shave money off of a minimalist system is to skip buying a discrete GPU and use the one built into modern processors, and to install Linux instead of buying a Windows license. The problem with doing so is that demanding games will be beyond your computer's abilities, at least without turning off most of the features that make them look good. To help you figure out what such a machine is capable of, Phoronix has published this article. Their tests show that Windows 10 currently has a very large performance lead over Ubuntu on the same hardware, because the Windows OpenGL driver is superior to the open-source Linux driver. That may change sooner rather than later, but be aware that, for now, you will not get the most out of Skylake's GPU on Linux.

index.jpg

"As it's been a while since my last Windows vs. Linux graphics comparison and haven't yet done such a comparison for Intel's latest-generation Skylake HD Graphics, the past few days I was running Windows 10 Pro x64 versus Ubuntu 15.10 graphics benchmarks with a Core i5 6600K sporting HD Graphics 530."


Source: Phoronix

Are NVIDIA and AMD ready for SteamOS?

Subject: Graphics Cards | October 23, 2015 - 07:19 PM |
Tagged: linux, amd, nvidia, steam os

Steam Machines powered by SteamOS are due to hit stores in the coming months, and to get the best performance you need to make sure that the GPU inside the machine plays nicely with the new OS. To that end, Phoronix has tested 22 GPUs: 15 NVIDIA cards ranging from a GTX 460 straight through to a TITAN X, and seven AMD cards from an HD 6570 through to the new R9 Fury. Part of the reason fewer AMD cards were included stems from driver issues, which prevented some models from functioning properly. They tested BioShock Infinite, both Metro games, CS:GO, and one of Josh's favourites, DiRT Showdown. The performance results may not be what you expect and are worth checking out in full. Phoronix also included cost-per-performance findings for budget-conscious gamers.

image.php_.jpg

"With Steam Machines set to begin shipping next month and SteamOS beginning to interest more gamers as an alternative to Windows for building a living room gaming PC, in this article I've carried out a twenty-two graphics card comparison with various NVIDIA GeForce and AMD Radeon GPUs while testing them on the Debian Linux-based SteamOS 2.0 "Brewmaster" operating system using a variety of Steam Linux games."


Source: Phoronix

Thinking of building a server farm?

Subject: Systems | October 13, 2015 - 07:23 PM |
Tagged: server farm, linux, DIY

Phoronix recently built a server farm (and a bar), a perfect use for a basement. In the process, they learned quite a bit about what creating your own server farm involves, as well as the costs. For instance, their power bill has gone up noticeably: including air conditioning, they are seeing usage of around 3,000 kWh a month, so you might want to do some calculations before setting up your own. Take a look at how the mostly finished design worked out and, if you are interested, you can find a link to the original article covering the build on the last page.

image.php_.jpg

"It's been just over six months since I completed construction on the large 60+ system server room where a ton of Linux benchmarking takes place just not for Phoronix.com but also the new LinuxBenchmarking.com daily performance tracking initiative and testing and development around our Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org software. Here's a look back, a few recommendations to reiterate for those aspiring to turn their cellar into a server farm, and a few things I'd do differently next time around."


Source: Phoronix

Linux turns 24 in time for the party in Dublin

Subject: General Tech | October 5, 2015 - 05:14 PM |
Tagged: LinuxCon Europe, linux, open source

LinuxCon Europe has just kicked off, and there are some interesting projects being discussed at the event. ARM, Cisco, NexB, Qualcomm, SanDisk, and Wind River have formed the OpenChain workgroup to bring some standardization to Linux software development, such as already exists in Debian, to ensure that multiple companies are not all reinventing the same wheels simultaneously. The Real-Time Linux Collaborative Project is developing software for robotics, telecom, and aviation, and includes members such as Google, Texas Instruments, Intel, ARM, and Altera. They will be working on Linux applications for industries where shaving a few milliseconds off of transaction times can be worth millions of dollars. The last major project announced at the convention is FOSSology 3.0, which will let you quickly and easily run licence and copyright scans, something near and dear to the heart of the Free and Open Source Software community. Check out more at The Inquirer.

spdx_logo.png

"Tim Zemlin, chief executive of the Foundation, said in his opening remarks that this year's opening day falls on the 24th anniversary of Linux itself and the 30th of the Free Software Foundation, giving credit to delegates for their part in the success of both."


Source: The Inquirer

Phoronix Looks at NVIDIA's Linux Driver Quality Settings

Subject: Graphics Cards | September 23, 2015 - 01:09 AM |
Tagged: nvidia, linux, graphics drivers

In the NVIDIA driver control panel, there is a slider that controls Performance vs Quality. On Windows, I leave it set to “Let the 3D application decide” and change my 3D settings individually, as needed. I haven't used NVIDIA's control panel on Linux very much, mostly because the machine I usually install Linux on is my laptop, which runs an AMD GPU, but the Linux UI seems to give the slider a little more prominence.

nvidia-geforce.png

Or is that GTux?

Phoronix decided to test how each of these settings affects a few titles, and the only benchmark they bothered reporting is Team Fortress 2. It turns out that the other titles see basically zero variance. TF2 saw a difference of 6 FPS, though, from 115 FPS at High Quality to 121 FPS at Quality. Oddly enough, Performance and High Performance performed worse than Quality.

To me, this sounds like NVIDIA has basically forgotten about the feature. It barely affects any title, the one game where it changes anything measurable dates from 2007, and it contradicts what the company is doing on other platforms. I predict that Quality is the default, which would match Windows (albeit with only three choices there: “Performance”, “Balanced”, and the default “Quality”). If it is, you should probably just leave it there 24/7, in case NVIDIA has literally not thought about tweaking the other settings. On Windows, the slider is somewhat redundant with GeForce Experience anyway.

Final note: Phoronix has only tested the GTX 980. Results may vary elsewhere, but probably don't.

Source: Phoronix

Who would have guessed? Microsoft's Cloud has a Linux lining

Subject: General Tech | September 22, 2015 - 05:06 PM |
Tagged: azure, microsoft, linux

It is a strange new world we find ourselves in, where part of Microsoft's Azure infrastructure will be built on Linux. Azure Cloud Switch will allow software-defined networking to be used on Azure by those brave enough to dabble in SDN. Microsoft will be incorporating the OpenCompute-developed Switch Abstraction Interface on top of Linux; as The Register points out, this is likely due to a lack of similar functionality in Windows software. In this particular case Microsoft will not be reinventing the wheel, and will instead wisely focus on improving the functionality of Azure and the Azure-based products, such as Office 365, that they have developed in house. The 'cloud' is a strange place, and it just got a little bit stranger.

windows azure.png

"Redmond's revealed that it's built something called Azure Cloud Switch (ACS), describing it as “a cross-platform modular operating system for data center networking built on Linux” and “our foray into building our own software for running network devices like switches.”"


Source: The Register