Subject: General Tech | November 15, 2017 - 02:12 PM | Jeremy Hellstrom
Tagged: gaming, Wolfenstein 2, vulkan, amd, nvidia
[H]ard|OCP took a close look at the new Wolfenstein game, covering the new graphics options that appear in the menus, the bugs they can cause, and of course the benchmarking. For this Vulkan game they chose three AMD cards and four NVIDIA cards, testing with a variety of those options enabled as well as looking at the effect resolution has on performance. As we have seen in other recent games, AMD's Vega 64 is a strong contender at 4K resolutions, surpassing the GTX 1080 but not quite matching the GTX 1080 Ti. It is also worth noting that this game loves VRAM; in fact, 8GB is not enough for the Uber settings. Read through the full review for performance numbers as well as insight into the best graphics settings to choose.
"Wolfenstein II: The New Colossus is out; this new game uses the id Tech 6 game engine and Vulkan API to give you a great gaming experience on the PC with today’s latest GPUs. We will compare performance features, see what settings work best, find what is playable in the game and compare performance among several video cards."
Here is some more Tech News from around the web:
- Battletech’s campaign mode is a robot Dark Ages @ Rock, Paper, SHOTGUN
- Doom definitely works on the Switch, but it looks noticeably worse @ Ars Technica
- Need For Speed Payback is really very terrible indeed @ Rock, Paper, SHOTGUN
- Star Wars Battlefront II PC graphics performance analysis @ Guru of 3D
- Wolfenstein 2 story DLC dated, detailed, silly-named @ Rock, Paper, SHOTGUN
- Assassin's Creed Origins: How Heavy Is It on Your CPU? @ Techspot
- Fresh cyber-hell awaits in new System Shock remake vid @ Rock, Paper, SHOTGUN
Subject: General Tech | November 8, 2017 - 03:26 PM | Jeremy Hellstrom
Tagged: gaming, Wolfenstein 2, the new colossus, nvidia, amd, vulkan
Wolfenstein II The New Colossus uses the Vulkan API, which could favour AMD's offerings; however, NVIDIA has vastly improved its support, so a win is not guaranteed. The Guru of 3D tested the three resolutions most people are interested in, 1080p, 1440p and 4K, on 20 different GPUs in total. They also looked at the impact of 4-core versus 8-core CPUs, testing the i7-4790K and i7-5960X as well as the Ryzen 7 1800X, and even explored how much VRAM the game uses. Drop by to see all their results as well as hints on dealing with the current bugs.
"We'll have a peek at the PC release of Wolfenstein II The New Colossus for Windows relative to graphics card performance. The game is 100% driven by the Vulkan API. In this test twenty graphics cards are being tested and benchmarked."
Here is some more Tech News from around the web:
- Stellaris FTL changes are in the warp pipes @ Rock, Paper, SHOTGUN
- Humble Strategy Simulator Bundle
- Nearly a year later, video game voice actors end their strike @ Ars Technica
- Call of Duty WWII: Benchmark Performance Analysis @ TechPowerUp
- Take Two: All future games will feature microtransactions @ HEXUS
- Wot I Think: Call of Duty: WW2 Multiplayer @ Rock, Paper, SHOTGUN
- Call of Duty: WW2: PC graphics analysis benchmark @ Guru of 3D
- Total War: Rome II expanding again with Empire Divided @ Rock, Paper, SHOTGUN
- F1 2017 On Linux With 23 Graphics Cards Using Vulkan @ Phoronix
- Radeon vs. NVIDIA Vulkan Performance For F1 2017 On Linux @ Phoronix
Subject: General Tech | August 24, 2017 - 11:24 AM | Alex Lustenberg
Tagged: vulkan, vlan, video, samsung galaxy note 8, rx vega, podcast, Linksys WRT32x, kaby lake, Intel, ice lake, htc vive, ECS, Core, asus zenphone 4, acer predator z271t
PC Perspective Podcast #464 - 08/17/17
Join us for continued discussion on RX Vega, Intel 8th Gen Core, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano
Peanut Gallery: Ken Addison, Alex Lustenberg
Program length: 1:34:56
Week in Review:
0:07:54 Let’s talk about RX Vega!
Different die packages
News items of interest:
Hardware/Software Picks of the Week
Subject: General Tech | August 19, 2017 - 10:00 PM | Scott Michaud
Tagged: pc gaming, oxide, Oxide Games, vulkan
Oxide Games has been mentioned all throughout the development of the next-generation graphics APIs: DirectX 12, Mantle, and Vulkan. Their Star Swarm “stress test” was one of the first practical examples of a game that desperately needs to make a lot of draw calls. Their rendering algorithm is also very different from other popular game engines': lighting is performed in object space rather than screen space, which the new APIs help with.
Currently, Ashes of the Singularity supports DirectX 11 and DirectX 12, but Vulkan will be added soon. Oxide will push the new graphics API in the 2.4 update, bringing increased CPU performance to all OSes, especially Windows 7 and 8 (neither of which supports DirectX 12), along with a free DLC pack that contains nine co-op maps. They also plan to continue optimizing Ashes of the Singularity for Vulkan in the future.
All of this will be available on Thursday, August 24th.
Subject: General Tech | August 17, 2017 - 09:41 PM | Scott Michaud
Tagged: id software, vulkan, doom, Doom 3
Over the last few days, Dustin Land of id Software has been publishing commits to his vkDOOM3 GitHub repository. This project, as the name suggests, adds a Vulkan-based renderer to the game, although it’s not really designed to replace the default OpenGL implementation. Instead, the project is a learning resource, showing how a full application handles the API.
This is quite interesting for me. While code samples can show you how a chunk of code is used in rough isolation, it’s sometimes good to see how it’s used in a broader context. For instance, when I was learning Unreal Engine 4, I occasionally searched the Unreal Tournament repository for whatever I was learning about. Sometimes, things just don’t “click” until you see the context, especially when your question starts with “why”.
If you’re interested, check out the GitHub repo. You will need to own Doom 3 BFG Edition to actually play it, though.
Subject: Graphics Cards | August 1, 2017 - 12:05 PM | Sebastian Peak
Tagged: Wolfenstein 2, vulkan, Vega, id Tech 6, id software, half-precision, game engine, FP16, amd
According to a report from Golem.de (German language), with the upcoming Wolfenstein 2: The New Colossus AMD Vega owners will have the advantage of FP16 shader support from a new version of the id Tech 6 engine. The game supports both DX12 and the Vulkan API, but the use of half-precision calculations - the scope of which has not been specified - will potentially offer higher frame rates for AMD Vega users.
AMD provided some technical details about Wolfenstein 2 during their Threadripper/Vega tech day, and this new game includes “special optimizations” in the id Tech 6 game engine for AMD Vega hardware:
“For what exactly id Software is using FP16 instead of FP32, AMD did not say. These could be post-processing effects, such as bloom. The performance should increase in the double-digit percentage range, though id Software did not want to comment on it.” (Translated from German.)
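The appeal of FP16 is throughput (Vega can pack two FP16 operations where one FP32 would go), and the cost is precision: a half float keeps only an 11-bit significand, roughly three decimal digits. A quick illustrative sketch using Python's struct module, whose 'e' format is IEEE 754 half precision:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Only ~3 decimal digits survive the round trip:
print(to_fp16(0.1))     # 0.0999755859375
# Integers are exact only up to 2048; beyond that the gaps widen:
print(to_fp16(2048.0))  # 2048.0
print(to_fp16(2049.0))  # 2048.0
```

Post-processing effects like bloom tolerate this kind of loss easily, which is why they are the usual FP16 candidates.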
Subject: General Tech | June 7, 2017 - 09:10 PM | Scott Michaud
Tagged: Qt, vulkan
During our recent interview, the Khronos Group mentioned that one reason to merge OpenCL into Vulkan was that, at first, the OpenCL working group wasn’t sure whether they wanted an explicit, low-level API or an easy-to-use one that hides the complexity. Vulkan taught them to take a very low-level position, because there can always be another layer above them that hides complexity from everything downstream of it. This is important for them, because the only layers below them are owned by OS and hardware vendors.
This post is about Qt, though. Qt is UI middleware, written in C++, that has become very popular of late. The big revamp of AMD’s control panel with Crimson Edition was the result of switching from .NET to Qt, which greatly sped up launch time. Qt announced their intent to support the Vulkan API on the very day that Vulkan launched.
First and foremost, their last bullet point notes that these stances can change as the middleware evolves, particularly with Qt Quick, Qt 3D, Qt Canvas 3D, QPainter, and similar classes; this is a discussion of their support in Qt 5.10 specifically. As it stands, though, Qt intends to focus on cross-platform support, window management, and “function resolving for the core API”. The application is expected to manage the rest of the Vulkan API itself (or, of course, use another helper for the other parts).
This makes sense for Qt’s position. Their lowest level classes should do as little as possible outside of what their developers expect, allowing higher-level libraries the most leeway to fill in the gaps. Qt does have higher-level classes, though, and I’m curious what others, especially developers, believe Qt should do with those to take advantage of Vulkan. Especially when we start getting into WYSIWYG editors, like Qt 3D Studio, there is room to do more.
Obviously, the first release isn’t the place to do it, but I’m curious nonetheless.
Subject: General Tech | June 7, 2017 - 04:54 PM | Scott Michaud
Tagged: pc gaming, linux, vulkan, Intel, mesa, feral interactive
According to Phoronix, Alex Smith of Feral Interactive has just published a few changes to the open-source Intel graphics driver that allow their upcoming Dawn of War III port for Linux to render correctly on Vulkan. This means the open-source Intel driver should support the game on day one, although drawing correctly and drawing efficiently can be two very different things -- or maybe not, we’ll see.
It’s interesting seeing things go in the other direction. Normally, graphics engineers parachute in to high-end developers and help them make the most of their software for each respective, proprietary graphics driver. In this case, we’re seeing the game studios pushing fixes to the graphics vendors, because that’s how open source rolls. It will be interesting to do a pros and cons comparison of each system one day, especially if cross-pollination results from it.
A Data Format for Whole 3D Scenes
The Khronos Group has finalized the glTF 2.0 specification, and they recommend that interested parties integrate this 3D scene format into their content pipeline starting now. It’s ready.
glTF is a format for delivering 3D content, especially full scenes, in a compact and quick-loading data structure. These features differentiate glTF from other 3D formats, like Autodesk’s FBX and even the Khronos Group’s own COLLADA, which are more like intermediate formats between tools, such as 3D editing software (e.g. Maya and Blender) and game engines. The Khronos Group doesn’t see a competing format for final scenes that are designed to be ingested directly, quick and small.
glTF 2.0 makes several important changes.
The previous version of glTF was based on a defined GLSL material, which limited how it could be used, although it did align with WebGL at the time (and that spurred some early adoption). The new version switches to Physically Based Rendering (PBR) workflows to define their materials, which has a few advantages.
First, PBR can represent a wide range of materials with just a handful of parameters. Rather than dictating a specific shader, the data structure can just... structure the data. The industry has settled on two main workflows, metallic-roughness and specular-gloss, and glTF 2.0 supports them both. (Metallic-roughness is the core workflow, but specular-gloss is provided as an extension, and they can be used together in the same scene. Also, during the briefing, I noticed that transparency was not explicitly mentioned in the slide deck, but the Khronos Group confirmed that it is stored as the alpha channel of the base color, and thus supported.) Because the format is now based on existing workflows, the implementation can be programmed in OpenGL, Vulkan, DirectX, Metal, or even something like a software renderer. In fact, Microsoft was a specification editor on glTF 2.0, and they have publicly announced using the format in their upcoming products.
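In practice, a metallic-roughness material is just a small JSON fragment. The sketch below builds one in Python; the `pbrMetallicRoughness` key names come from the glTF 2.0 specification, while the material name and factor values are made up for illustration. Note the four-component `baseColorFactor`, whose alpha channel carries transparency:

```python
import json

# Illustrative glTF 2.0 metallic-roughness material. The key names are
# from the spec; the name and factor values are invented for this sketch.
material = {
    "name": "brushed_gold",
    "pbrMetallicRoughness": {
        # RGBA -- the alpha channel doubles as transparency.
        "baseColorFactor": [1.0, 0.766, 0.336, 1.0],
        "metallicFactor": 1.0,    # fully metallic
        "roughnessFactor": 0.35   # fairly glossy
    }
}

gltf_chunk = json.dumps({"materials": [material]}, indent=2)
print(gltf_chunk)
```

Because the format only stores these few parameters, any renderer (OpenGL, Vulkan, DirectX, Metal, or software) can map them onto its own shading implementation.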
The original GLSL material, from glTF 1.0, is available as an extension (for backward compatibility).
A second advantage of PBR is that it is lighting-independent. When you define a PBR material for an object, it can be placed in any environment and it will behave as expected. Noticeable, albeit extreme, examples of where this would have been useful are the outdoor scenes of Doom 3 and the indoor scenes of Battlefield 2. It also simplifies asset creation. Some applications, like Substance Painter and Quixel, have artists stencil materials onto their geometry, like gold, rusted iron, and scuffed plastic, and automatically generate the appropriate textures. PBR also aligns well with deferred rendering (see below), which performs lighting as a post-process step and thus skips pixels (fragments) that are overwritten.
PBR Deferred Buffers in Unreal Engine 4 Sun Temple.
Lighting is applied to these completed buffers, not every fragment.
glTF 2.0 also improves support for complex animations by adding morph targets. Most 3D animations, beyond just moving, rotating, and scaling whole objects, are based on skeletal animation. This method works by binding vertices to bones, then moving, rotating, and scaling a hierarchy of joints. It works well for humans, animals, hinges, and other collections of joints and sockets, and it was already supported in glTF 1.0. Morph targets, on the other hand, let the artist directly control individual vertices, interpolating between defined states. This is often demonstrated with facial animation, blending between smiles and frowns, but in an actual game it is often approximated with skeletal animations (for performance reasons). Regardless, glTF 2.0 now supports morph targets, too, letting artists make the choice that best suits their content.
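Per the glTF specification, each morph target stores per-vertex displacements from the base mesh, and the renderer blends them with per-target weights. A toy Python sketch of that blend (the mesh data and names are made up for illustration):

```python
def apply_morph_targets(base, targets, weights):
    """Blend morph targets the way glTF defines them: each target holds
    per-vertex displacements, and the final position of vertex v is
    base[v] + sum(weight_i * target_i[v])."""
    out = []
    for v, vertex in enumerate(base):
        out.append(tuple(
            c + sum(w * t[v][axis] for w, t in zip(weights, targets))
            for axis, c in enumerate(vertex)
        ))
    return out

base  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]   # displacements, not positions
frown = [(0.0, -1.0, 0.0), (0.0, -1.0, 0.0)]

# Halfway toward "smile", ignoring "frown":
print(apply_morph_targets(base, [smile, frown], [0.5, 0.0]))
# [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]
```

Animating the weights over time is what interpolates the mesh between its defined states.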
Speaking of performance, the Khronos Group is also promoting “enhanced performance” as a benefit of glTF 2.0. I asked whether they had anything to elaborate on, and they responded with a little story. While glTF 1.0 validators were being created, one of the engineers compiled a list of design choices that would lead to minor performance issues. The fixes for these were originally supposed to be embodied in a glTF 1.1 specification, but PBR workflows and Microsoft’s request to abstract the format away from GLSL led to glTF 2.0, which is where the performance optimizations finally ended up. Basically, there wasn’t just one or two changes that made a big impact; it was the result of many tiny changes adding up.
Also, the binary version of glTF is now a core feature in glTF 2.0.
The slide looks at the potential future of glTF, after 2.0.
Looking forward, the Khronos Group has a few items on their glTF roadmap. These did not make glTF 2.0, but they are current topics for future versions. One potential addition is mesh compression, via the Google Draco team, to further decrease file size of 3D geometry. Another roadmap entry is progressive geometry streaming, via Fraunhofer SRC, which should speed up runtime performance.
Yet another roadmap entry is “Unified Compression Texture Format for Transmission”, specifically Basis by Binomial, for texture compression that remains as small as possible on the GPU. Graphics processors can only natively operate on a handful of formats, like DXT and ASTC, so textures need to be converted when they are loaded by an engine. Often, when a texture is loaded at runtime (rather than imported by the editor), it will be decompressed and left in that state on the GPU. Some engines, like Unity, have a runtime compress method that converts textures to DXT, but the developer needs to call it explicitly, and the documentation says it is lower quality than the algorithm used by the editor (although I haven’t tested this). Suffice it to say, a format that could circumvent all of that would be nice.
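As a back-of-the-envelope illustration of what is at stake (generic numbers, not from any particular engine): uncompressed RGBA8 costs 4 bytes per texel, while DXT1/BC1 packs every 4x4 block into 8 bytes, an 8:1 saving in GPU memory:

```python
def rgba8_bytes(w: int, h: int) -> int:
    """Uncompressed RGBA, 4 bytes per texel."""
    return w * h * 4

def dxt1_bytes(w: int, h: int) -> int:
    """DXT1/BC1 stores each 4x4 texel block in 8 bytes (0.5 bytes/texel)."""
    return ((w + 3) // 4) * ((h + 3) // 4) * 8

for size in (512, 1024, 2048):
    print(size, rgba8_bytes(size, size) // 1024, dxt1_bytes(size, size) // 1024)
# A 2048x2048 texture drops from 16384 KiB to 2048 KiB.
```

A texture that gets decompressed at load time and left that way eats the full uncompressed budget, which is exactly what a GPU-friendly transmission format would avoid.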
Again, if you’re interested in adding glTF 2.0 to your content pipeline, then get started. It’s ready. Microsoft is doing it, too.
Subject: Graphics Cards, Mobile | June 2, 2017 - 02:23 AM | Scott Michaud
Tagged: Imagination Technologies, PowerVR, ray tracing, ue4, vulkan
Imagination Technologies has published another video that demonstrates ray tracing with their PowerVR Wizard GPU. The test system, today, is a development card that is running on Ubuntu, and powering Unreal Engine 4. Specifically, it is using UE4’s Vulkan renderer.
The demo highlights two major advantages of ray-traced images. The first is that, rather than applying a baked cubemap with screen-space reflections to simulate metallic objects, this demo calculates reflections with secondary rays. From there, it’s just a matter of feeding the gathered information into the parameters the shader requires and doing the calculations.
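The secondary-ray direction itself is plain vector math, r = d - 2(d·n)n; a minimal Python sketch of the reflection step (illustrative, not Imagination's implementation):

```python
def reflect(d, n):
    """Reflect incident direction d about unit surface normal n:
    r = d - 2 * (d . n) * n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dot * b for a, b in zip(d, n))

# A ray travelling down and to the right bounces off a floor
# whose normal points straight up:
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (1.0, 1.0, 0.0)
```

A ray tracer shoots that reflected ray back into the scene and shades the metallic surface with whatever it hits, instead of sampling a baked cubemap.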
The second advantage is that it can do arbitrary lens effects, like distortion and equirectangular 360° projections. Rasterization, which projects 3D world coordinates into 2D coordinates on a screen, assumes that straight edges stay straight, and that causes problems as the FoV gets very large, especially for a full circle. Imagination Technologies acknowledges that workarounds exist, like breaking the render into the six faces of a cube, but the best approximation is casting a ray per pixel and seeing what it hits.
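The per-pixel ray for an equirectangular projection is straightforward to generate; the sketch below uses a standard longitude/latitude parameterization (generic math, not taken from the demo):

```python
import math

def equirect_ray(px, py, width, height):
    """Map a pixel in an equirectangular image to a world-space ray
    direction for a full 360x180 view. Longitude sweeps -pi..pi across x;
    latitude sweeps pi/2..-pi/2 down y."""
    lon = (px / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (py / height) * math.pi
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

# The centre pixel looks straight ahead, down +z:
print(equirect_ray(512, 256, 1024, 512))  # (0.0, 0.0, 1.0)
```

Casting one such ray per pixel yields a distortion-correct full-circle image directly, with no cubemap stitching.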
The demo was originally for GDC 2017, back in February, but the videos have just been released.