Podcast #464 - Vega Redux, Intel 8th Gen Core, and more!

Subject: General Tech | August 24, 2017 - 11:24 AM |
Tagged: vulkan, vlan, video, samsung galaxy note 8, rx vega, podcast, Linksys WRT32x, kaby lake, Intel, ice lake, htc vive, ECS, Core, asus zenphone 4, acer predator z271t

PC Perspective Podcast #464 - 08/24/17

Join us for continued discussion on RX Vega, Intel 8th Gen Core, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano

Peanut Gallery: Ken Addison, Alex Lustenberg

Program length: 1:34:56

Podcast topics of discussion:
  1. Week in Review:
    1. 0:07:54 Let’s talk about RX Vega!
      1. My Review
      2. Pricing concerns
      3. Availability
      4. Different die packages
      5. Locked BIOS
  2. News items of interest:
  3. Hardware/Software Picks of the Week
    1. 1:26:45 Allyn: Razer Blade
  4. Closing/outro
 


Ashes of the Singularity: Escalation Vulkan Support Soon

Subject: General Tech | August 19, 2017 - 10:00 PM |
Tagged: pc gaming, oxide, Oxide Games, vulkan

Oxide Games has been mentioned all throughout the development of the next-generation graphics APIs: DirectX 12, Mantle, and Vulkan. Their Star Swarm “stress test” was one of the first practical examples of a game that needs to issue an enormous number of draw calls. Their rendering algorithm also differs from other popular game engines in that lighting is performed on the object rather than on the screen, which the new APIs help with.
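
As a rough illustration of why that matters (this is illustrative code, not Oxide's), the appeal of the explicit APIs comes down to recording thousands of cheap draw calls into a command buffer ahead of time and submitting them to the GPU in one batch:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// Illustrative sketch only (not Oxide's code): with Vulkan, a huge number of
// draw calls can be recorded into a command buffer with very little per-call
// driver overhead, then submitted to the GPU in a single batch.
struct Unit {
    uint32_t firstVertex;
    uint32_t vertexCount;
};

void recordSwarm(VkCommandBuffer cmd, VkPipeline pipeline,
                 const std::vector<Unit>& units)
{
    // One pipeline bind, then one vkCmdDraw per ship -- tens of thousands of
    // them is exactly the workload Star Swarm was built to stress.
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    for (const Unit& u : units)
        vkCmdDraw(cmd, u.vertexCount, /*instanceCount=*/1,
                  u.firstVertex, /*firstInstance=*/0);
}
```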

stardock-2016-ashes-logo.png

Currently, Ashes of the Singularity supports DirectX 11 and DirectX 12, but Vulkan will be added soon. Oxide will be pushing the new graphics API in the 2.4 update, bringing increased CPU performance to all OSes, especially Windows 7 and 8 (neither of which supports DirectX 12), along with a free DLC pack that contains nine co-op maps. They also plan to continue optimizing Ashes of the Singularity for Vulkan in the future.

All of this will be available on Thursday, August 24th.

Vulkan Renderer for Doom 3 BFG Source Code Published

Subject: General Tech | August 17, 2017 - 09:41 PM |
Tagged: id software, vulkan, doom, Doom 3

Over the last few days, Dustin Land of id Software has been publishing commits to his vkDOOM3 GitHub repository. This project, as the name suggests, adds a Vulkan-based renderer to the game, although it’s not really designed to replace the default OpenGL implementation. Instead, the project is a learning resource, showing how a full application handles the API.

id-logo.jpg

This is quite interesting for me. While code samples can show you how a chunk of code is used in rough isolation, it’s sometimes good to see how it’s used in a broader context. For instance, when I was learning Unreal Engine 4, I occasionally searched through the Unreal Tournament repository for whatever I was learning about. Sometimes, things just don’t “click” until you see the context, especially when your question starts with “why”.
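
In that spirit, even the very first step of a Vulkan program hints at why a full reference codebase helps. A minimal sketch of instance creation (my own, not taken from vkDOOM3) looks roughly like this; everything that follows it -- device selection, swapchain, render passes, pipelines -- is where a project like vkDOOM3 provides the broader context:

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>

// Minimal sketch (not from vkDOOM3): the first thing any full Vulkan
// application has to do is describe itself and create a VkInstance.
int main() {
    VkApplicationInfo appInfo = {};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "minimal-vulkan";
    appInfo.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo createInfo = {};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&createInfo, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```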

If you’re interested, check out the GitHub repo. You will need to own Doom 3 BFG Edition to actually play it, though.

Source: GitHub

Report: Wolfenstein 2 Optimized for AMD Vega with FP16 Shader Support

Subject: Graphics Cards | August 1, 2017 - 12:05 PM |
Tagged: Wolfenstein 2, vulkan, Vega, id Tech 6, id software, half-precision, game engine, FP16, amd

According to a report from Golem.de (German language), AMD Vega owners will have the advantage of FP16 shader support in the upcoming Wolfenstein 2: The New Colossus, thanks to a new version of the id Tech 6 engine. The game supports both DX12 and the Vulkan API, but the use of half-precision calculations - the scope of which has not been specified - will potentially offer higher frame rates for AMD Vega users.

vega-26.jpg

AMD provided some technical details about Wolfenstein 2 during their Threadripper/Vega tech day, and this new game includes “special optimizations” in the id Tech 6 game engine for AMD Vega hardware:

“What exactly id Software is using FP16 instead of FP32 for, AMD did not say. These could be post-processing effects, such as bloom. The performance increase should be in the double-digit percentage range, though id Software did not want to comment on it.” (Translated from German.)
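
To give a rough sense of the tradeoff, here is a trivial C++ sketch (assuming a compiler with the _Float16 extension, such as recent GCC or Clang): half-precision values take half the storage and bandwidth of FP32, at the cost of dropping to roughly three decimal digits of precision, which is generally acceptable for effects like bloom.

```cpp
#include <cstdio>

// Trivial sketch of the FP32 vs FP16 tradeoff. Assumes a compiler with the
// _Float16 extension (recent GCC/Clang); GPUs expose the same idea through
// half-precision shader types.
int main() {
    float    full = 3.14159265f;        // 32 bits, ~7 decimal digits
    _Float16 half = (_Float16)full;     // 16 bits, ~3 decimal digits

    std::printf("fp32: %.7f (%zu bytes)\n", full, sizeof(full));
    std::printf("fp16: %.7f (%zu bytes)\n", (float)half, sizeof(half));
    // Post-processing math such as bloom tolerates the reduced precision,
    // and doing two FP16 operations in place of one FP32 operation is where
    // Vega's packed half-precision math gets its potential speedup.
    return 0;
}
```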

Source: Golem.de

Qt Outlines What Their View on Vulkan Support Is

Subject: General Tech | June 7, 2017 - 09:10 PM |
Tagged: Qt, vulkan

During our recent interview, the Khronos Group mentioned that one reason to merge OpenCL into Vulkan was that, at first, the OpenCL working group wasn’t sure whether they wanted an explicit, low-level API or an easy-to-use one that hides the complexity. Vulkan taught them to take a very low-level position, because there can always be another layer above them that hides complexity from everything downstream of it. This is important for them, because the only layers below them are owned by OS and hardware vendors.

QtCompany_logo_1200x630.png

This post is about Qt, though. Qt is UI middleware, written in C++, that has become very popular of late. The big revamp of AMD’s control panel with the Crimson Edition was a result of switching from .NET to Qt, which greatly sped up launch time. Qt announced their intent to support the Vulkan API on the very day that it launched.

Yesterday, they wrote a blog post detailing their intentions for Vulkan support in Qt 5.10.

First and foremost, their last bullet point notes that these stances can change as the middleware evolves, particularly with Qt Quick, Qt 3D, Qt Canvas 3D, QPainter, and similar classes; this is a discussion of their support for Qt 5.10 specifically. As it stands, though, Qt intends to focus on cross-platform support, window management, and “function resolving for the core API”. The application is expected to manage the rest of the Vulkan API itself (or, of course, use another helper for the other parts).
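
As a rough sketch of what that scope implies, assuming the QVulkanInstance and QVulkanWindow classes Qt has described for 5.10, Qt handles instance creation, function resolving, and the window/surface plumbing, while everything from device setup onward stays with the application:

```cpp
#include <QGuiApplication>
#include <QVulkanInstance>
#include <QVulkanWindow>

// Hypothetical minimal usage of the pieces Qt says it will provide in 5.10:
// instance creation plus a window that owns the surface and swapchain.
// Everything else (pipelines, command buffers, drawing) stays with the app,
// typically via a QVulkanWindowRenderer subclass.
int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QVulkanInstance inst;             // wraps VkInstance creation and function resolving
    if (!inst.create())
        qFatal("Failed to create Vulkan instance");

    QVulkanWindow window;             // cross-platform surface and swapchain management
    window.setVulkanInstance(&inst);
    window.resize(1280, 720);
    window.show();

    return app.exec();
}
```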

This makes sense for Qt’s position. Their lowest level classes should do as little as possible outside of what their developers expect, allowing higher-level libraries the most leeway to fill in the gaps. Qt does have higher-level classes, though, and I’m curious what others, especially developers, believe Qt should do with those to take advantage of Vulkan. Especially when we start getting into WYSIWYG editors, like Qt 3D Studio, there is room to do more.

Obviously, the first release isn’t the place to do it, but I’m curious nonetheless.

Source: Qt

Dawn of War III Vulkan Support on Linux to Add Intel GPUs

Subject: General Tech | June 7, 2017 - 04:54 PM |
Tagged: pc gaming, linux, vulkan, Intel, mesa, feral interactive

According to Phoronix, Alex Smith of Feral Interactive has just published a few changes to the open-source Intel graphics driver that allow their upcoming Dawn of War III port for Linux to render correctly with Vulkan. This means that the open-source Intel driver should support the game on day one, although drawing correctly and drawing efficiently could be two very different things -- or maybe not, we’ll see.

feral-2017-dawnofwar3.png

It’s interesting seeing things go in the other direction. Normally, graphics engineers parachute in to high-end developers and help them make the most of their software for each respective, proprietary graphics driver. In this case, we’re seeing the game studios pushing fixes to the graphics vendors, because that’s how open source rolls. It will be interesting to do a pros and cons comparison of each system one day, especially if cross-pollination results from it.

Source: Phoronix
Subject: General Tech
Manufacturer: The Khronos Group

A Data Format for Whole 3D Scenes

The Khronos Group has finalized the glTF 2.0 specification, and they recommend that interested parties integrate this 3D scene format into their content pipeline starting now. It’s ready.

khronos-2017-glTF_500px_June16.png

glTF is a format to deliver 3D content, especially full scenes, in a compact and quick-loading data structure. These features differentiate glTF from other 3D formats, like Autodesk’s FBX and even the Khronos Group’s own Collada, which are more like intermediate formats between tools, such as 3D editing software (ex: Maya and Blender) and game engines. The Khronos Group doesn’t see a competing format for final scenes that are designed to be ingested directly, quick to load and small.

glTF 2.0 makes several important changes.

The previous version of glTF was based on a defined GLSL material, which limited how it could be used, although it did align with WebGL at the time (and that spurred some early adoption). The new version switches to Physically Based Rendering (PBR) workflows to define their materials, which has a few advantages.

khronos-2017-PBR material model in glTF 2.0.jpg

First, PBR can represent a wide range of materials with just a handful of parameters. Rather than dictating a specific shader, the data structure can just... structure the data. The industry has settled on two main workflows, metallic-roughness and specular-gloss, and glTF 2.0 supports them both. (Metallic-roughness is the core workflow, but specular-gloss is provided as an extension, and they can be used together in the same scene. Also, during the briefing, I noticed that transparency was not explicitly mentioned in the slide deck, but the Khronos Group confirmed that it is stored as the alpha channel of the base color, and thus supported.) Because the format is now based on existing workflows, the implementation can be programmed in OpenGL, Vulkan, DirectX, Metal, or even something like a software renderer. In fact, Microsoft was a specification editor on glTF 2.0, and they have publicly announced using the format in their upcoming products.
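
As an illustration of how compact that parameter set is, here is a hypothetical C++ struct mirroring the handful of values a glTF 2.0 metallic-roughness material carries (the field names follow the spec’s pbrMetallicRoughness block, but the struct itself is just a sketch, not part of any loader):

```cpp
// Hypothetical sketch of the metallic-roughness parameter set; the names
// mirror glTF 2.0's "pbrMetallicRoughness" material block, but this struct
// is illustrative rather than taken from any implementation.
struct PbrMetallicRoughness {
    float baseColorFactor[4] = {1.0f, 1.0f, 1.0f, 1.0f}; // RGBA; alpha carries transparency
    float metallicFactor     = 1.0f;  // 0 = dielectric, 1 = metal
    float roughnessFactor    = 1.0f;  // 0 = mirror-smooth, 1 = fully rough
    int   baseColorTexture         = -1; // optional texture indices (-1 = none)
    int   metallicRoughnessTexture = -1; // metalness in blue channel, roughness in green
};
```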

The original GLSL material, from glTF 1.0, is available as an extension (for backward compatibility).

A second advantage of PBR is that it is lighting-independent. When you define a PBR material for an object, it can be placed in any environment and it will behave as expected. Noticeable, albeit extreme, examples of where this would have been useful are the outdoor scenes of Doom 3 and the indoor scenes of Battlefield 2. It also simplifies asset creation. Some applications, like Substance Painter and Quixel, have artists stencil materials onto their geometry, like gold, rusted iron, and scuffed plastic, and automatically generate the appropriate textures. PBR also aligns well with deferred rendering (see below), which performs lighting as a post-process step and thus skips pixels (fragments) that are overwritten.

epicgames-2017-suntempledeferred.png

PBR Deferred Buffers in Unreal Engine 4 Sun Temple.
Lighting is applied to these completed buffers, not every fragment.

glTF 2.0 also improves support for complex animations by adding morph targets. Most 3D animations, beyond just moving, rotating, and scaling whole objects, are based on skeletal animation. This method works by binding vertices to bones and then moving, rotating, and scaling a hierarchy of joints. It works well for humans, animals, hinges, and other collections of joints and sockets, and it was already supported in glTF 1.0. Morph targets, on the other hand, allow the artist to directly control individual vertices between defined states. This is often demonstrated with a facial animation, interpolating between smiles and frowns, but, in an actual game, this is often approximated with skeletal animations (for performance reasons). Regardless, glTF 2.0 now supports morph targets, too, letting artists make the choice that best suits their content.
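
For the curious, morph-target blending itself is simple. Here is a minimal sketch of the weighted-displacement sum glTF 2.0 describes: each target stores per-vertex displacements, and the final position is the base position plus the weighted sum of those displacements (illustrative code, not from any particular loader):

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of morph-target blending: final position = base position
// plus the weighted sum of each target's per-vertex displacement.
struct Vec3 { float x, y, z; };

Vec3 morph(const Vec3& base,
           const std::vector<Vec3>& displacements,  // one entry per morph target
           const std::vector<float>& weights)       // animated per frame
{
    Vec3 out = base;
    for (std::size_t i = 0; i < displacements.size(); ++i) {
        out.x += weights[i] * displacements[i].x;
        out.y += weights[i] * displacements[i].y;
        out.z += weights[i] * displacements[i].z;
    }
    return out;
}
```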

Speaking of performance, the Khronos Group is also promoting “enhanced performance” as a benefit of glTF 2.0. I asked whether they had anything to elaborate on, and they responded with a little story. While glTF 1.0 validators were being created, one of the engineers compiled a list of design choices that would lead to minor performance issues. The fixes for these were originally supposed to be embodied in a glTF 1.1 specification, but PBR workflows and Microsoft’s request to abstract the format away from GLSL led to glTF 2.0, which is where the performance optimizations finally ended up. Basically, there wasn’t just one or two changes that made a big impact; it was the result of many tiny changes that add up.

Also, the binary version of glTF is now a core feature in glTF 2.0.

khronos-2017-gltfroadmap.png

The slide looks at the potential future of glTF, after 2.0.

Looking forward, the Khronos Group has a few items on their glTF roadmap. These did not make glTF 2.0, but they are current topics for future versions. One potential addition is mesh compression, via the Google Draco team, to further decrease file size of 3D geometry. Another roadmap entry is progressive geometry streaming, via Fraunhofer SRC, which should speed up runtime performance.

Yet another roadmap entry is “Unified Compression Texture Format for Transmission”, specifically Basis by Binomial, for texture compression that remains as small as possible on the GPU. Graphics processors can only natively operate on a handful of formats, like DXT and ASTC, so textures need to be converted when they are loaded by an engine. Often, when a texture is loaded at runtime (rather than imported by the editor), it will be decompressed and left in that state on the GPU. Some engines, like Unity, have a runtime compress method that converts textures to DXT, but the developer needs to explicitly call it, and the documentation says it’s lower quality than the algorithm used by the editor (although I haven’t tested this). Suffice it to say, having a format that can circumvent all of that would be nice.
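
To put rough numbers on why this matters, here is a back-of-the-envelope sketch: a 4096×4096 texture left uncompressed as RGBA8 occupies 64 MiB of GPU memory, versus 8 MiB in DXT1 or 16 MiB in DXT5, so staying in a GPU-native block format from transmission all the way to the GPU pays off quickly.

```cpp
#include <cstdint>
#include <cstdio>

// Back-of-the-envelope sketch of why staying in a GPU block format matters:
// RGBA8 is 4 bytes per pixel, DXT1 packs each 4x4 block into 8 bytes
// (0.5 bytes/pixel), and DXT5 into 16 bytes (1 byte/pixel).
int main() {
    const uint64_t w = 4096, h = 4096;
    const uint64_t rgba8 = w * h * 4;              // decompressed at load time
    const uint64_t dxt1  = (w / 4) * (h / 4) * 8;  // 8 bytes per 4x4 block
    const uint64_t dxt5  = (w / 4) * (h / 4) * 16; // 16 bytes per 4x4 block

    std::printf("RGBA8: %llu MiB\n", (unsigned long long)(rgba8 >> 20)); // 64 MiB
    std::printf("DXT1:  %llu MiB\n", (unsigned long long)(dxt1  >> 20)); //  8 MiB
    std::printf("DXT5:  %llu MiB\n", (unsigned long long)(dxt5  >> 20)); // 16 MiB
    return 0;
}
```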

Again, if you’re interested in adding glTF 2.0 to your content pipeline, then get started. It’s ready. Microsoft is doing it, too.

Imagination PowerVR Ray Tracing with UE4 & Vulkan Demo

Subject: Graphics Cards, Mobile | June 2, 2017 - 02:23 AM |
Tagged: Imagination Technologies, PowerVR, ray tracing, ue4, vulkan

Imagination Technologies has published another video that demonstrates ray tracing with their PowerVR Wizard GPU. This time the test system is a development card running on Ubuntu and powering Unreal Engine 4. Specifically, it is using UE4’s Vulkan renderer.

The demo highlights two major advantages of ray-traced images. The first is that, rather than applying a baked cubemap with screen-space reflections to simulate metallic objects, this demo calculates reflections with secondary rays. From there, it’s just a matter of feeding the gathered information into the parameters that the shader requires and doing the calculations.

The second advantage is that it can do arbitrary lens effects, like distortion and equirectangular 360° projections. Rasterization, which projects 3D world coordinates into 2D coordinates on a screen, assumes that straight edges stay straight, and that causes problems as the field of view gets very large, especially for a full circle. Imagination Technologies acknowledges that workarounds exist, like breaking up the render into the six faces of a cube, but the best approximation is casting a ray per pixel and seeing what it hits.
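
A minimal sketch of that “one ray per pixel” idea for an equirectangular 360° image (illustrative only, not Imagination’s code): each pixel maps to a longitude/latitude pair, and the primary ray direction comes straight from spherical coordinates, with no planar projection involved.

```cpp
#include <cmath>

// Illustrative sketch: for an equirectangular 360 render, each pixel maps to
// a longitude/latitude pair and the primary ray direction comes straight from
// spherical coordinates -- something a rasterizer's planar projection cannot
// express in a single pass.
struct Vec3 { float x, y, z; };

Vec3 equirectangularRayDir(int px, int py, int width, int height)
{
    const float pi  = 3.14159265358979f;
    const float lon = (px + 0.5f) / width  * 2.0f * pi - pi;  // -pi .. +pi
    const float lat = pi / 2.0f - (py + 0.5f) / height * pi;  // +pi/2 .. -pi/2, top to bottom
    return { std::cos(lat) * std::sin(lon),
             std::sin(lat),
             std::cos(lat) * std::cos(lon) };
}
```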

The demo was originally for GDC 2017, back in February, but the videos have just been released.

Podcast #451 - New Surface Pro, Analog Keyboards, Water Cooled PSUs and more!

Subject: General Tech | May 25, 2017 - 11:12 AM |
Tagged: vulkan, video, Surface Pro, SolidScale, seasonic, ps4 pro, podcast, opencl, micon, macbook pro, Khronos, fsp, Eisbaer, Chromebook, Alphacool, aimpad

PC Perspective Podcast #451 - 05/25/17

Join us for talk about the new Surface Pro, analog keyboards, water cooled PSUs and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano

Peanut Gallery: Alex Lustenberg, Jim Tanous, Ken Addison

Program length: 1:39:25

Podcast topics of discussion:

  1. Week in Review:
  2. Casper!
  3. News items of interest:
  4. Hardware/Software Picks of the Week
  5. Closing/outro

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

Manufacturer: The Khronos Group

The Right People to Interview

Last week, we reported that OpenCL’s roadmap would be merging into Vulkan, and OpenCL would, starting at some unspecified time in the future, be based “on an extended version of the Vulkan API”. This was based on quotes from several emails between myself and the Khronos Group.

Since that post, I had the opportunity to have a phone interview with Neil Trevett, president of the Khronos Group and chairman of the OpenCL working group, and Tom Olson, chairman of the Vulkan working group. We spent a little over a half hour going over Neil’s International Workshop on OpenCL (IWOCL) presentation, discussing the decision, and answering a few lingering questions. This post will present the results of that conference call in a clean, readable way.

khronos-officiallogo-Vulkan_500px_Dec16.png

First and foremost, while OpenCL is planning to merge into the Vulkan API, the Khronos Group wants to make it clear that “all of the merging” is coming from the OpenCL working group. The Vulkan API roadmap is not affected by this decision. Of course, the Vulkan working group will be able to take advantage of technologies that are dropping into their lap, but those discussions have not even begun yet.

Neil: Vulkan has its mission and its roadmap, and it’s going ahead on that. OpenCL is doing all of the merging. We’re kind-of coming in to head in the Vulkan direction.

Does that mean, in the future, that there’s a bigger wealth of opportunity to figure out how we can take advantage of all this kind of mutual work? The answer is yes, but we haven’t started those discussions yet. I’m actually excited to have those discussions, as are many people, but that’s a clarity. We haven’t started yet on how Vulkan, itself, is changed (if at all) by this. So that’s kind-of the clarity that I think is important for everyone out there trying to understand what’s going on.

Tom also prepared an opening statement. It’s not as easy to abbreviate, so it’s here unabridged.

Tom: I think that’s fair. From the Vulkan point of view, the way the working group thinks about this is that Vulkan is an abstract machine, or at least there’s an abstract machine underlying it. We have a programming language for it, called SPIR-V, and we have an interface controlling it, called the API. And that machine, in its full glory… it’s a GPU, basically, and it’s got lots of graphics functionality. But you don’t have to use that. And the API and the programming language are very general. And you can build lots of things with them. So it’s great, from our point of view, that the OpenCL group, with their special expertise, can use that and leverage that. That’s terrific, and we’re fully behind it, and we’ll help them all we can. We do have our own constituency to serve, which is the high-performance game developer first and foremost, and we are going to continue to serve them as our main mission.

So we’re not changing our roadmap so much as trying to make sure we’re a good platform for other functionality to be built on.

Neil then went on to mention that the decision to merge OpenCL’s roadmap into the Vulkan API took place only a couple of weeks ago. The purpose of the press release was to reach OpenCL developers and get their feedback. According to him, they did a show of hands at the conference, with a room full of a hundred OpenCL developers, and no-one was against moving to the Vulkan API. This gives them confidence that developers will accept the decision, and that their needs will be served by it.

Next up is the why. Read on for more.