Speed Metal on the Desktop

Subject: General Tech | November 13, 2017 - 03:33 PM |
Tagged: 3d printing, metal, Desktop Metal

Desktop Metal's new printer follows the same basic process as current metal 3D printing: an inkjet-like head lays down layers of metal powder mixed with wax and a plastic binding agent.  Upon completion of the print, the part is submerged in a debinding fluid that dissolves the wax, then spends some time in a furnace to burn off the binding agent and sinter the powder, leaving a final product that is between 96% and 99.8% metal.  Traditional tool and die still handles this kind of work much more quickly; however, Desktop Metal told The Register that their new printer operates at 100 times the speed of the competition, at a price that is very competitive with either tool and die or existing 3D printing.  It will be interesting to see whether this applies to a wide enough variety of prints, and provides high enough quality, to unseat the incumbent processes.

DM_logo.jpg

"Desktop Metal, based in Boston, USA, has opened up pre-orders for its Studio System which uses inkjet-like technology, rather than laser-based techniques, to produce precision metal parts."


Source: The Register
Subject: General Tech
Manufacturer: The Khronos Group

A Data Format for Whole 3D Scenes

The Khronos Group has finalized the glTF 2.0 specification, and they recommend that interested parties integrate this 3D scene format into their content pipeline starting now. It’s ready.

khronos-2017-glTF_500px_June16.png

glTF is a format for delivering 3D content, especially full scenes, in a compact and quick-loading data structure. Those goals differentiate glTF from other 3D formats, like Autodesk’s FBX and even the Khronos Group’s own Collada, which are more like intermediate formats that shuttle assets between tools, such as 3D editing software (ex: Maya and Blender) and game engines. The Khronos Group doesn’t see a competing format for final scenes that are designed to be ingested directly, staying quick to load and small on disk.
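To make that structure concrete, here is a minimal sketch of what a glTF 2.0 asset looks like, written as a TypeScript object literal rather than raw JSON; the file name and values are illustrative, not taken from any real asset.

```typescript
// Hypothetical minimal glTF 2.0 asset: one scene, one node, one triangle mesh.
// The JSON portion only describes the scene graph and how to interpret binary
// data; the heavy vertex data lives in an external (or embedded) buffer.
const gltfAsset = {
  asset: { version: "2.0" },
  scene: 0,
  scenes: [{ nodes: [0] }],
  nodes: [{ mesh: 0 }],
  meshes: [{ primitives: [{ attributes: { POSITION: 0 } }] }],
  // Accessor 0: three VEC3 vertices stored as 32-bit floats (componentType 5126).
  accessors: [{ bufferView: 0, componentType: 5126, count: 3, type: "VEC3" }],
  bufferViews: [{ buffer: 0, byteOffset: 0, byteLength: 36 }],
  buffers: [{ uri: "triangle.bin", byteLength: 36 }], // illustrative file name
};
```

An engine can walk this structure and start issuing draw calls almost immediately, which is the “ingested directly” property described above.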

glTF 2.0 makes several important changes.

The previous version of glTF was based on a defined GLSL material, which limited how it could be used, although it did align with WebGL at the time (and that spurred some early adoption). The new version switches to Physically Based Rendering (PBR) workflows to define its materials, which has a few advantages.

khronos-2017-PBR material model in glTF 2.0.jpg

First, PBR can represent a wide range of materials with just a handful of parameters. Rather than dictating a specific shader, the data structure can just... structure the data. The industry has settled on two main workflows, metallic-roughness and specular-gloss, and glTF 2.0 supports them both. (Metallic-roughness is the core workflow, but specular-gloss is provided as an extension, and they can be used together in the same scene. Also, during the briefing, I noticed that transparency was not explicitly mentioned in the slide deck, but the Khronos Group confirmed that it is stored as the alpha channel of the base color, and thus supported.) Because the format is now based on existing workflows, the implementation can be programmed in OpenGL, Vulkan, DirectX, Metal, or even something like a software renderer. In fact, Microsoft was a specification editor on glTF 2.0, and they have publicly announced using the format in their upcoming products.

The original GLSL material, from glTF 1.0, is available as an extension (for backward compatibility).
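As a rough illustration of the core workflow, a metallic-roughness material in glTF 2.0 boils down to a few factors and optional textures. The values below are made up, and the fourth component of baseColorFactor is the alpha channel mentioned above.

```typescript
// Sketch of a glTF 2.0 metallic-roughness material (illustrative values only).
const scuffedPlastic = {
  name: "scuffed_plastic",
  pbrMetallicRoughness: {
    baseColorFactor: [0.8, 0.1, 0.1, 0.75], // RGBA; alpha carries transparency
    metallicFactor: 0.0,                    // dielectric, not a metal
    roughnessFactor: 0.6,                   // fairly matte surface
    // baseColorTexture / metallicRoughnessTexture references would go here
  },
  alphaMode: "BLEND", // tells the renderer to honor the alpha channel
};
```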

A second advantage of PBR is that it is lighting-independent. When you define a PBR material for an object, that object can be placed in any environment and it will behave as expected. Notable, albeit extreme, examples of where this would have been useful are the outdoor scenes of Doom 3 and the indoor scenes of Battlefield 2. PBR also simplifies asset creation: applications like Substance Painter and Quixel let artists stencil materials, like gold, rusted iron, and scuffed plastic, onto their geometry and automatically generate the appropriate textures. Finally, it aligns well with deferred rendering (see below), which performs lighting as a post-process step and thus skips pixels (fragments) that would be overwritten.

epicgames-2017-suntempledeferred.png

PBR Deferred Buffers in Unreal Engine 4 Sun Temple.
Lighting is applied to these completed buffers, not every fragment.
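As a conceptual sketch of why that pairing works (this is not the full glTF BRDF, and the shading below is heavily simplified), a deferred renderer can write a material's handful of PBR parameters into its G-buffer and then run a single lighting pass over the completed buffers, instead of needing one bespoke shader per hand-authored material.

```typescript
// What a deferred lighting pass might read back per pixel (simplified sketch).
type GBufferSample = {
  baseColor: [number, number, number];
  metallic: number;   // 0 = dielectric, 1 = metal
  roughness: number;  // 0 = mirror-smooth, 1 = fully rough
};

// Diffuse-only shading from G-buffer parameters. A real renderer would also build
// a microfacet specular term from f0 = lerp(0.04, baseColor, metallic) and roughness.
function shadeFromGBuffer(s: GBufferSample, nDotL: number,
                          lightColor: [number, number, number]): number[] {
  const diffuse = s.baseColor.map(c => c * (1 - s.metallic)); // metals get no diffuse
  return diffuse.map((c, i) => c * lightColor[i] * Math.max(nDotL, 0));
}
```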

glTF 2.0 also improves support for complex animations by adding morph targets. Most 3D animations, beyond just moving, rotating, and scaling whole objects, are skeletal: vertices are bound to bones, and the animation moves, rotates, and scales a hierarchy of joints. This works well for humans, animals, hinges, and other collections of joints and sockets, and it was already supported in glTF 1.0. Morph targets, on the other hand, let the artist directly interpolate individual vertices between defined states. This is often demonstrated with facial animation, blending between smiles and frowns, although actual games frequently approximate that with skeletal animation for performance reasons. Regardless, glTF 2.0 now supports morph targets too, letting artists make the choice that best suits their content.
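Mechanically, each morph target stores per-vertex deltas against the base mesh, and animated weights blend them together. Here is a minimal sketch of that blend; the function and parameter names are my own, not taken from the spec.

```typescript
// Blend morph targets onto a base mesh: out = base + sum(weight[t] * delta[t]).
function applyMorphTargets(
  basePositions: Float32Array,  // base POSITION data, xyz-interleaved
  targetDeltas: Float32Array[], // one delta array per morph target, same layout
  weights: number[]             // animated weights, one per target
): Float32Array {
  const out = Float32Array.from(basePositions);
  targetDeltas.forEach((deltas, t) => {
    for (let i = 0; i < out.length; i++) {
      out[i] += weights[t] * deltas[i];
    }
  });
  return out;
}
```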

Speaking of performance, the Khronos Group is also promoting “enhanced performance” as a benefit of glTF 2.0. I asked whether they had anything to elaborate on, and they responded with a little story. While glTF 1.0 validators were being created, one of the engineers compiled a list of design choices that would lead to minor performance issues. The fixes for these were originally supposed to land in a glTF 1.1 specification, but PBR workflows and Microsoft’s request to abstract the format away from GLSL led to glTF 2.0, which is where those performance optimizations finally ended up. Basically, there weren’t just one or two changes that made a big impact; the gain is the result of many tiny changes that add up.

Also, the binary version of glTF (GLB) is now a core feature in glTF 2.0.
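For the curious, the binary container is just a small header followed by chunks, typically one JSON chunk and one binary-buffer chunk. The sketch below reads that header; it assumes a well-formed file and skips the validation a real loader would perform.

```typescript
// Read the GLB header and the first (JSON) chunk of a binary glTF 2.0 file.
function parseGlbHeader(buf: ArrayBuffer) {
  const view = new DataView(buf);
  const magic = view.getUint32(0, true);    // 0x46546C67, i.e. ASCII "glTF"
  const version = view.getUint32(4, true);  // 2 for glTF 2.0
  const length = view.getUint32(8, true);   // total file length in bytes
  if (magic !== 0x46546c67) throw new Error("not a GLB file");

  // The first chunk starts at byte 12: its length, its type, then its payload.
  const chunkLength = view.getUint32(12, true);
  const chunkType = view.getUint32(16, true); // 0x4E4F534A is ASCII "JSON"
  if (chunkType !== 0x4e4f534a) throw new Error("expected a JSON chunk first");

  const jsonText = new TextDecoder().decode(new Uint8Array(buf, 20, chunkLength));
  return { version, length, scene: JSON.parse(jsonText) };
}
```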

khronos-2017-gltfroadmap.png

The slide looks at the potential future of glTF, after 2.0.

Looking forward, the Khronos Group has a few items on their glTF roadmap. These did not make glTF 2.0, but they are current topics for future versions. One potential addition is mesh compression, via the Google Draco team, to further decrease file size of 3D geometry. Another roadmap entry is progressive geometry streaming, via Fraunhofer SRC, which should speed up runtime performance.

Yet another roadmap entry is “Unified Compression Texture Format for Transmission”, specifically Basis by Binomial, for texture compression that remains as small as possible on the GPU. Graphics processors can only natively operate on a handful of formats, like DXT and ASTC, so textures need to be converted when they are loaded by an engine. Often, when a texture is loaded at runtime (rather than imported through the editor), it will be decompressed and left in that state in GPU memory. Some engines, like Unity, have a runtime compress method that converts textures to DXT, but the developer needs to call it explicitly, and the documentation says it is lower quality than the algorithm used by the editor (although I haven’t tested this). Suffice it to say, having a format that can sidestep all of that would be nice.
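To show why the native format matters, here is a rough WebGL-flavored sketch (WebGL chosen only for illustration): if the driver exposes the S3TC extension, DXT data can be uploaded and stay compressed on the GPU; otherwise the engine has to fall back to plain, much larger RGBA.

```typescript
// Upload a texture, keeping it compressed on the GPU when the driver allows it.
function uploadTexture(gl: WebGLRenderingContext, width: number, height: number,
                       dxt5Data: Uint8Array | null, rgbaFallback: Uint8Array) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);

  const s3tc = gl.getExtension("WEBGL_compressed_texture_s3tc");
  if (s3tc && dxt5Data) {
    // Stays compressed in GPU memory: roughly 1 byte per texel for DXT5.
    gl.compressedTexImage2D(gl.TEXTURE_2D, 0, s3tc.COMPRESSED_RGBA_S3TC_DXT5_EXT,
                            width, height, 0, dxt5Data);
  } else {
    // Decompressed path: 4 bytes per texel resident on the GPU.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, rgbaFallback);
  }
  return tex;
}
```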

Again, if you’re interested in adding glTF 2.0 to your content pipeline, then get started. It’s ready. Microsoft is doing it, too.

WebKit Proposal for WebGPU

Subject: General Tech | February 8, 2017 - 10:46 PM |
Tagged: webkit, webgpu, metal, vulkan, webgl

Apple’s WebKit team has just announced their proposal for WebGPU, which competes with WebGL to provide graphics and GPU compute to websites. Being from Apple, it is based on the Metal API, so it has a lot of potential, especially as a Web graphics API.

Okay, so I have mixed feelings about this announcement.

apple-2017-webkit-logo.png

First, and most concerning, is that Apple has attempted to legally block open standards in the past. For instance, when The Khronos Group created WebCL based on OpenCL, which Apple owns the trademark and several patents to, Apple shut the door on extending their licensing agreement to the new standard. If the W3C considers Apple’s proposal, they should be really careful about what legal control they allow Apple to retain.

From a functionality standpoint, though, this is very interesting. With the aforementioned death of WebCL, as well as the sluggish progress of WebGL compute shaders, there is a lot of room to use one (or more) GPUs in a system for high-end compute tasks. Even if you are not interested in gaming in a web browser (although many people are, especially if you count the market that Adobe Flash dominated for the last ten years), you might want to GPU-accelerate photo and video tasks. Having an API that allows for this would be very helpful going forward, although, as stated, others are working on it, like the Khronos Group with WebGL compute shaders. On the other-other hand, an API that allows explicit multi-GPU would be even more interesting.

Further, it sounds like they eventually intend to ingest byte code, like DirectX 12 and Vulkan do with DXIL and SPIR-V, respectively, but the current proposal accepts shader code as a string and compiles it in the driver. This is interesting from a security standpoint, because it obscures exactly what the GPU-executed code consists of, but that is up to the graphics and browser vendors to figure out... for now.
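That “hand the driver a string” model is the same pattern WebGL already exposes, so a WebGL snippet makes a reasonable stand-in here; the experimental WebGPU patch’s exact API may differ and could still change.

```typescript
// The string-based shader pipeline as WebGL exposes it today.
function compileShader(gl: WebGLRenderingContext, type: number, source: string) {
  const shader = gl.createShader(type)!;
  gl.shaderSource(shader, source); // hand the raw source string to the driver...
  gl.compileShader(shader);        // ...which compiles it behind the scenes
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader) ?? "shader compile failed");
  }
  return shader;
}
```

An intermediate byte code like DXIL or SPIR-V moves that compilation work, and much of the validation, out of the driver and ahead of time.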

So when will we see it? No idea! There’s an experimental WebKit patch, which requires the Metal API, and an API proposal... a couple blog posts... a tweet or two... and that’s about it.

So what do you think? Does the API sound interesting? Does Apple’s involvement scare you? Or does getting scared about Apple’s involvement annoy you? Comment away! : D

Source: WebKit

Who Decided to Call a Lightweight API "Metal"?

Subject: Graphics Cards | October 7, 2015 - 07:01 AM |
Tagged: opengl, metal, apple

Ars Technica took it upon themselves to benchmark Metal in the latest OS X El Capitan release. Even though OpenGL on Mac OS X is not considered to be on par with its Linux counterparts, probably due to the driver situation until recently, it still pulls ahead of Metal in many situations.

apple-2015-geforce-benchmark-metal.png

Image Credit: Ars Technica

Unlike the other new, low-overhead graphics APIs, Metal uses the traditional binding model: basically, you have a GPU object that you attach your data to, then you call one of a handful of “draw” functions to signal the driver. DirectX 12, Vulkan, and Mantle, on the other hand, treat work as commands recorded and submitted on queues. The latter model works better in multi-core environments, and it aligns with GPU compute APIs, but the former is easier to port OpenGL and DirectX 11 applications to.
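Roughly, the two models look like this. The binding-model half below uses real WebGL calls purely as a familiar illustration of “bind state, then draw”; the queue half is paraphrased in comments rather than any specific API, since the details vary between DirectX 12, Vulkan, and Mantle.

```typescript
// Traditional binding model: attach state to the one GPU context, then draw.
// (Assumes the vertex attribute pointers were already set up for this buffer.)
function drawWithBindingModel(gl: WebGLRenderingContext, program: WebGLProgram,
                              vbo: WebGLBuffer, vertexCount: number) {
  gl.useProgram(program);
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
  gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
}

// Command-queue model (DirectX 12 / Vulkan / Mantle), in pseudocode:
//   const cmd = commandPool.beginCommandBuffer();
//   cmd.bindPipeline(pipeline);
//   cmd.bindVertexBuffer(vbo);
//   cmd.draw(vertexCount);
//   queue.submit(cmd); // multiple threads can record their own buffers in parallel
```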

Ars Technica notes that faster GPUs, such as the NVIDIA GeForce GTX 680MX, show higher gains than slower ones. Their “best explanation” is that “faster GPUs can offload more work from the CPU”. That is pretty much true, yes. The new APIs are designed to keep GPUs loaded and working as much as possible, because GPUs really do spend a lot of time sitting around doing nothing. If a GPU is already easy to keep loaded, because it can't accept much work in the first place, then there is little benefit to decreasing CPU overhead or spreading submission across multiple cores.

Granted, there are many ways that benchmarks like these could be used incorrectly. I'll assume that Ars Technica and GFXBench are not making any simple mistakes, but it's good to stay critical just in case.

Source: Ars Technica