Realtime Raytracing Commit Spotted in Unity GitHub

Subject: General Tech | September 14, 2018 - 10:32 PM |
Tagged: rtx, Unity, ray tracing, directx raytracing, DirectX 12

As Ken wrote in his take in a separate post, NVIDIA has made its Turing architecture details public; the new GPUs will bring real-time ray tracing to PC gaming later this month. When the architecture was announced, NVIDIA had demos running in Unreal Engine 4, and a few partnered games (Battlefield V, Shadow of the Tomb Raider, and Metro Exodus) showed off their implementations.

As we expected, Unity is working on supporting it too.

unity-2018-book-of-the-dead.jpg

Not ray tracing, but from the same project at Unity.

The first commit showed up today on Unity’s GitHub in their Scriptable Render Pipelines project. Looking through the changes, it appears to just generate the acceleration structure from the Renderer-type objects in the current scene (as well as define the toggle properties, of course). It looks like we are still a long way out.
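
To get a feel for what that amounts to, here is a hypothetical sketch of the pattern: walk the scene’s renderers and feed them into an acceleration structure. Object.FindObjectsOfType is real Unity API, but the IAccelStructure type and its methods are illustrative stand-ins, not the names from the commit.

```csharp
using UnityEngine;

// Stand-in for whatever Unity's real acceleration structure ends up being;
// none of these members are actual Unity API.
public interface IAccelStructure
{
    void Clear();
    void AddInstance(Renderer renderer);
    void Build();
}

public static class AccelStructureBuilder
{
    // Rebuild the structure from every enabled Renderer in the scene.
    public static void Rebuild(IAccelStructure accel)
    {
        accel.Clear();

        // FindObjectsOfType is real Unity API; everything on "accel" is assumed.
        foreach (Renderer renderer in Object.FindObjectsOfType<Renderer>())
        {
            if (renderer.enabled)
                accel.AddInstance(renderer);
        }

        accel.Build();
    }
}
```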

I’m looking forward to ray tracing implementations, though. I tend to like art styles with anisotropic metal trims and soft shadows, which are difficult to get right with rasterization alone because both effects depend on other objects in the scene. In the case of metal, reflections dominate the look and feel of the material. In the case of soft shadows, you really need to keep track of how much of an area light has been blocked between it and the rendered fragment.
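
To make the soft-shadow half of that concrete, here is a minimal sketch of the underlying idea: cast several shadow rays toward random points on the light’s surface and average how many get through. Physics.Linecast is real Unity API; modeling the light’s surface as an axis-aligned Bounds is a simplifying assumption for illustration.

```csharp
using UnityEngine;

// Minimal sketch: estimate how much of an area light reaches a surface
// point by casting shadow rays at random points on the light's surface.
public static class SoftShadows
{
    public static float ShadowFactor(Vector3 fragment, Bounds lightSurface,
                                     int samples, System.Random rng)
    {
        int unblocked = 0;
        for (int i = 0; i < samples; i++)
        {
            Vector3 target = RandomPointIn(lightSurface, rng);

            // Physics.Linecast returns true when something blocks the segment.
            if (!Physics.Linecast(fragment, target))
                unblocked++;
        }

        // 0 = fully shadowed, 1 = fully lit; values in between form the penumbra.
        return (float)unblocked / samples;
    }

    static Vector3 RandomPointIn(Bounds b, System.Random rng)
    {
        float Lerp(float a, float c) => a + (c - a) * (float)rng.NextDouble();
        return new Vector3(Lerp(b.min.x, b.max.x),
                           Lerp(b.min.y, b.max.y),
                           Lerp(b.min.z, b.max.z));
    }
}
```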

And yes, it will depend on the art style, but mine just happens to be computationally expensive.

Unity 2018.2 Released

Subject: General Tech | July 10, 2018 - 10:35 PM |
Tagged: Unity, pc gaming

The second Unity update of 2018 has been published to their website today. This version continues their work on Scriptable Render Pipelines, including their own Lightweight Render Pipeline (LWRP) and High Definition Render Pipeline (HDRP) implementations. Both are still considered previews, but the aim is to replace the standard shader with two optimized graphics pipelines: one tuned for performance (mobile, VR, and extra performance on higher-end devices) and one tuned for high-end effects (multiple aligned transparent objects, reflections, etc.).

unity-2018-2018.1_image31.jpg

This splits Unity’s customer base from “one-size-fits-all-sort-of” into two sizes, although developers can also create their own scriptable render pipeline. That lets them tune the graphics pipeline to whatever their game needs, although it seems to mean they will need to build a lot of their own graphics technology if they do. (This seems clearly targeted at mid- to large-sized studios, but that’s just my opinion.) Of course, they can also continue to use the standard shader, but some Unity talks have already suggested that not all new features will come to the old pipeline.

2018.2 also continues development of the C# Job System, the ECS design pattern, and their Burst compiler, which a separate announcement confirmed is now available as an official preview package.
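
For anyone who hasn’t touched the Job System yet, the pattern looks roughly like this. IJobParallelFor, NativeArray, and [BurstCompile] are the actual package APIs; the job itself is just a minimal sketch, not something from the release notes.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using UnityEngine;

// Minimal Job System sketch: a Burst-compiled job that scales a float
// array across worker threads.
[BurstCompile]
public struct ScaleJob : IJobParallelFor
{
    public NativeArray<float> Values;
    public float Factor;

    public void Execute(int index)
    {
        Values[index] *= Factor;
    }
}

public class ScaleJobExample : MonoBehaviour
{
    void Start()
    {
        var values = new NativeArray<float>(1024, Allocator.TempJob);

        // Schedule in batches of 64 indices, then block until the job finishes.
        JobHandle handle = new ScaleJob { Values = values, Factor = 2f }
            .Schedule(values.Length, 64);
        handle.Complete();

        values.Dispose();
    }
}
```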

Source: Unity

Unity 2018.1 Released

Subject: General Tech | May 2, 2018 - 08:13 PM |
Tagged: Unity, video games

The first entry in Unity’s 2018.x line of releases has just been published to their website. Developers can now choose to migrate their projects to it and expect official support until 2018.2 arrives, or they can stick with 2017.4, which will be supported for two years, but not get new engine features. That said, if you have a big project that is expected to ship within a handful of months, then you may just want things to stay constant and stable until you ship (and maybe publish a wave of DLC).

unity-2018-2018.1_image31.jpg

There are a few big additions in this version, but the ones I care about most are still in preview. The first is, of course, the ECS design pattern with the C# Job System. It is still super early days for this one, but I’m very interested in a rigid, optimized, data-driven workflow that makes it easy to batch tasks together. Give me constraints – that’s okay – as long as I can get value from them.
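
The appeal is easier to see with a concrete shape. In the ECS pattern, a component is plain data with no behavior, so thousands of instances can be packed contiguously in memory and processed in tight, batchable loops. The preview API was still changing at the time of writing, so treat this as a sketch of the pattern rather than a stable interface.

```csharp
using Unity.Entities;

// ECS-style components: pure data, no methods. Because these structs are
// stored contiguously, a system can iterate thousands of them with
// cache-friendly loops that the Job System can batch across threads.
// IComponentData is the preview package's marker interface; the system
// API around it was still in flux when 2018.1 shipped.
public struct Position : IComponentData
{
    public float X, Y, Z;
}

public struct Velocity : IComponentData
{
    public float X, Y, Z;
}
```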

Then we get to the Scriptable Render Pipeline and its two presets: High-Definition Render Pipeline and Lightweight Render Pipeline. This allows the developer to control how their content is rendered, including how it is culled, how materials and lights are processed, what materials and lights can do, and so forth. They also say that some features will only come to the High-Definition Render Pipeline, to push people off the standard workflow and onto the new render path… but I wonder how that will affect developers who create their own scriptable render pipeline. It’s reasonable to assume that a developer who makes their own path will need to do some level of tweaking to pick up new features, but I wonder how much effort Unity will put into helping those developers.
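
For a sense of what writing your own pipeline involves, the skeleton is roughly the following. The experimental SRP API moved around during the 2018.x cycle, so treat the exact signatures here as approximate rather than authoritative.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Experimental.Rendering;

// Skeleton of a custom scriptable render pipeline: Unity hands you the
// cameras, and everything else (culling, passes, lighting) is up to you.
public class MinimalPipeline : RenderPipeline
{
    public override void Render(ScriptableRenderContext context, Camera[] cameras)
    {
        base.Render(context, cameras);

        foreach (Camera camera in cameras)
        {
            context.SetupCameraProperties(camera);

            // Clear to the camera's background color.
            var cmd = new CommandBuffer { name = "Clear" };
            cmd.ClearRenderTarget(true, true, camera.backgroundColor);
            context.ExecuteCommandBuffer(cmd);
            cmd.Release();

            // Culling and the opaque/transparent draw passes would go here.
            context.Submit();
        }
    }
}
```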

There is also a new beta update to the Post Processing stack, which should be familiar to users of Unreal Engine 4. Unity has continued to fold a bunch of effects – color grading, bloom, reflections, ambient occlusion, certain antialiasing techniques, and so forth – into a canonical suite of filters. They have also added volumes that developers can place in their scene, forming a hierarchy that smoothly transitions between effects.
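
Setting one of those volumes up from script is straightforward; this sketch uses the v2 stack’s PostProcessVolume component and its blending knobs. The values are arbitrary, and a profile with the actual effect overrides would still need to be assigned.

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Sketch of the v2 stack's volume blending: a local volume whose effects
// fade in as the camera approaches its trigger collider.
public class InteriorVolumeSetup : MonoBehaviour
{
    void Start()
    {
        var volume = gameObject.AddComponent<PostProcessVolume>();
        volume.isGlobal = false;      // applies only inside the attached trigger collider
        volume.priority = 10f;        // higher-priority volumes win overlapping blends
        volume.blendDistance = 3f;    // world units over which settings fade in and out

        // A PostProcessProfile holding the effect overrides (bloom, grading,
        // etc.) would be assigned to volume.profile, typically in the editor.
    }
}
```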

From a practical standpoint, the new package manager also looks very interesting. There’s not much to write about it, on an enthusiast PC hardware site at least, but it could be a nice way of delivering features to users. Instead of waiting for a whole new Unity release, you can fiddle with new features on a one-by-one basis. Maybe even third-party content, typically found in the asset store, can find its way on there – with a network of dependencies that the manager just sorts out for you.

Check it out on Unity’s website.

Source: Unity

zSpace and Unity Announce XR Resources for Education

Subject: General Tech | November 5, 2017 - 08:14 PM |
Tagged: Unity, zspace, xr, AR, VR

The Unity Educator Toolkit was created by Unity3D to integrate game-development learning into the K-12 public curriculum. Now zSpace, which we’ve mentioned a few times, is joining the initiative with their mixed-reality platform. The company is known for creating displays that, when viewed through their glasses, track your head position and make objects appear to float in front of the screen. They also have a stylus that lets you interact with the virtual object.

zspace-2017-elephant.jpg

They are focused on the educational side of VR and AR.

It’s not entirely clear what this means, because a lot of the details are behind a sign-up process. That said, if you’re an educator, then check out the package to see if it’s relevant for you. Creating games is an interesting, albeit challenging and somewhat daunting, method of expressing oneself. Giving kids the tools to make little game jam-style expressions, or even using the technology in your actual lessons, will reach a new group of students.

Source: zSpace

OTOY Discussed AI Denoising at Unite Austin

Subject: General Tech | October 4, 2017 - 08:59 PM |
Tagged: 3D rendering, otoy, Unity, deep learning

When raytracing images, sample count has a massive impact on both quality and rendering performance. The sample count is the number of rays cast within a pixel; averaged over many, many rays, the result eventually converges on what the pixel should be. Think of it this way: if your first ray bounces directly into a bright light, and the second ray bounces into the vacuum of space, should the color be white? Black? Half-grey? Who knows! However, if you send 1,000 rays in some randomized pattern, then the average is probably a lot closer to what it should be (which depends on how big the light is, what it bounces off of, etc.).
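
The averaging argument is easy to demonstrate with a toy estimator. This sketch treats each ray as a coin flip against the light’s coverage of the pixel; it is illustrative only, not anything from OTOY’s talk.

```csharp
using System;

// Toy Monte Carlo pixel estimator: each "ray" either hits the light (1.0)
// or misses (0.0). The true pixel value is the light's coverage, and more
// samples pull the average toward it, which is exactly why low-sample
// raytraced frames look noisy.
public static class MonteCarloPixel
{
    public static double Estimate(double lightCoverage, int samples, Random rng)
    {
        double sum = 0.0;
        for (int i = 0; i < samples; i++)
            sum += rng.NextDouble() < lightCoverage ? 1.0 : 0.0;
        return sum / samples;
    }

    public static void Main()
    {
        var rng = new Random(42);

        // With 25% coverage: 2 samples can only land on 0.0, 0.5, or 1.0,
        // while 1000 samples reliably land close to 0.25.
        Console.WriteLine(Estimate(0.25, 2, rng));
        Console.WriteLine(Estimate(0.25, 1000, rng));
    }
}
```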

At Unite Austin, which started today, OTOY showed off an “AI temporal denoiser” algorithm for raytraced footage. Typically, an artist chooses a sample rate that looks good enough to the end viewer. In this case, the artist only needs to choose enough samples that an AI can create a good-enough video for the end user. While I’m curious how much performance is required in the inferencing stage, I do know how much a drop in sample rate can affect render times, and it’s a lot.

Check out OTOY’s video, embedded above.

Unity 2017.2.0f1 Released

Subject: General Tech | September 23, 2017 - 12:22 PM |
Tagged: pc gaming, Unity

While it’s not technically released yet, Unity has flipped the naming scheme of Unity 2017.2 to Unity 2017.2.0f1. The “f” stands for final, so we will probably see a blog post on it soon. This version has a handful of back-end changes, such as improved main-thread performance when issuing commands to graphics APIs, but the visible changes are mostly in two areas: XR (VR + AR) and baked lighting.

unity-logo-rgb.png

From the XR standpoint, a few additions stand out. First, this version now supports Google Tango and Windows Mixed Reality; the latter is tied to the Windows 10 Fall Creators Update, so it makes sense for Unity to add support in the version that ships before that update arrives (October 17th). In terms of features, the editor now supports emulating a Vive headset, so you can test some VR elements without having one plugged in. I expect this will mostly help those who want to do a bit of development somewhere they don’t have access to their headset, although that’s blind speculation on my part.

The other area that got a boost is baked global illumination. Unity started introducing its new Progressive Lightmapper in Unity 5.6; it bakes lighting into scenes in the background as you work. This update lets you toggle shadows on a per-object basis, and it supports double-sided materials. You still cannot have independent lighting calculations for the front and back of a triangle... if you want that, then you will need to give your models some volume. This is mostly for situations like the edge of a level, so you don’t need to build a second wall, facing away from the playable area, just to block light coming in from outside.

I’m not sure when the official release is, but it looks like the final, supported build is out now.

Source: Unity
Subject: General Tech
Manufacturer: SILVIA

Intelligent Gaming

Kal Simpson recently had the chance to sit down for an extensive interview with Alex Mayberry, Chief Product Officer at Cognitive Code. The company specializes in conversational AI; its SILVIA platform can be adapted to a variety of platforms and applications. Kal's comments are in bold while Alex's are in italics.

SILVIA virtual assistant.jpg

Always good to speak with you, Alex. Whether it's the latest Triple-A video game release or the progress being made in changing the way we play – virtual reality, for instance – your views and developments within the gaming space as a whole remain impressive. Before we begin, I’d like to give the audience a brief flashback of your career history. Prominent within the video game industry, you’ve been involved with many, many titles, primarily within the PC gaming space: Quake 2: The Reckoning, America’s Army, a plethora of World of Warcraft titles.

Those more familiar with your work know you as the lead game producer for Diablo 3 / Reaper of Souls, as well as the executive producer for Star Citizen. We spoke about the former around the game's release for PC, PlayStation 4, and Xbox One back in 2014.

So I ask, given your huge involvement with some of the most popular titles, what sparked your interest in the development of intelligent computing platforms? No doubt the technology can be adapted to applications within gaming, but what’s the initial factor that drove you to Cognitive Code – the SILVIA technology?

AM: Conversational intelligence was something that I had never even thought about in terms of game development. My experience arguing with my Xbox and trying to get it to change my television channel left me pretty sceptical about the technology. But after leaving Star Citizen, my path crossed with Leslie Spring, the CEO and Founder of Cognitive Code, and the creator of the SILVIA platform. Initially, Leslie was helping me out with some engineering work on VR projects I was spinning up. After collaborating for a bit, he introduced me to his AI, and I became intrigued by it. Although I was still very focused on VR at the time, my mind kept drifting to SILVIA.

I kept pestering Leslie with questions about the technology, and he continued to share some of the things that it could do. It was when I saw one of his game engine demos showing off a sci-fi world with freely conversant robots that the light went on in my head, and I suddenly got way more interested in artificial intelligence. At the same time, I was discovering challenges in VR that needed solutions. Not having a keyboard in VR creates an obstacle for capturing user input, and floating text in your field of view is really detrimental to the immersion of the experience. Also, when you have life-size characters in VR, you naturally want to speak to them. This is when I got interested in using SILVIA to introduce an entirely new mechanic to gaming and interactive entertainment. No more do we have to rely on conversation trees and scripted responses.

how-silvia-work1.jpg

No more do we have to read a wall of text from a quest giver. With this technology, we can have a realistic and free-form conversation with our game characters, and speak to them as if they are alive. This is such a powerful tool for interactive storytelling, and it will allow us to breathe life into virtual characters in a way that’s never before been possible. Seeing the opportunity in front of me, I joined up with Cognitive Code and have spent the last 18 months exploring how to design conversationally intelligent avatars. And I’ve been having a blast doing it.

Click here to continue reading the entire interview!

Unity Labs Announces Global Research Fellowship

Subject: General Tech | June 28, 2017 - 11:17 PM |
Tagged: Unity, machine learning, deep learning

Unity, maker of the popular 3D game engine of the same name, has announced a research fellowship for integrating machine learning into game development. Two students, who must have been enrolled in a Master's or PhD program as of June 26th, will be selected and provided with $30,000 for a six-month fellowship. The deadline is midnight (PDT) on September 9th.

unity-logo-rgb.png

We’re beginning to see a lot of machine-learning applications being discussed for gaming. There are some cases, like global illumination and fluid simulation, where a deep-learning algorithm can hallucinate a convincing result faster than a physical solver can produce a correct one. In those cases, it makes sense to post-process each frame, so, naturally, game engine developers are paying attention.

If eligible, you can apply on their website.

Source: Unity

Unity 5.6 Released with Vulkan Support

Subject: General Tech | April 1, 2017 - 07:54 PM |
Tagged: Unity, pc gaming, vulkan

If you are a perpetual license holder for Unity 5.x, then your last free update has just arrived. Unity 5.6 brings Vulkan support for Windows, Linux, and Android. I just installed the new version and checked which graphics APIs it uses on Windows when you uncheck the auto box, and the list comprises DirectX 11 and DirectX 9. It’s possible that auto could be choosing Vulkan, but I’m not going to query which process is loading which DLL under a variety of conditions. If you’re interested in Unity development, go to File -> Build Settings -> Player Settings -> Other Settings and choose the load order of your APIs, using the + button to add one that’s not there by default.

unity-logo-rgb.png

The lighting system should be more impressive, though. In Unreal Engine 4, I’m used to having dynamic lighting until I stop everything and start a lighting bake; when it’s done, I have static lighting until I invalidate it with a change (if the level is set to invalidate light maps on changes). In Unity 5.6, though, the engine slowly replaces the light maps as they are calculated, getting progressively higher quality. Since you can notice problems even at low quality, you only need to wait as long as it takes to spot the errors, which speeds up development.

In terms of platforms, Unity 5.6 adds Daydream, Cardboard, Nintendo Switch, and WebAssembly.

Unity 5.6 is available now. The preview of Unity 2017, the next version, should arrive this month.

Source: Unity

Unity 5.6 Beta Supports Vulkan API

Subject: General Tech | December 22, 2016 - 06:54 PM |
Tagged: pc gaming, Unity, vulkan

One of the most popular video game engines, Unity, has released a beta of Unity 5.6, which will be the last version of the Unity 5.x line. This release pushes Vulkan into full support on both desktop and mobile, which actually beats Unreal Engine 4 on the desktop side of things. Specifically, Vulkan is available for the Android, Windows, Linux, and Tizen operating systems. Apple users should be happy that this version also updates Metal for iOS and macOS, but Apple is still preventing vendors from shipping Vulkan drivers, so you really shouldn’t feel too happy.

unity-logo-rgb.png

At Unity’s Unite 2016 keynote, the company claimed around 30-60% better performance on the new API “out of the box”. I find that statement slightly odd, though, because Unity doesn’t really provide much access to “the box” without expensive source-code up-sells. For what I assume is the majority of projects, the deepest involvement with the engine internals is buying and activating a plug-in, and hiding Vulkan behind a paywall would be kind of crappy.

I mentioned that this will be the last Unity 5.x version. While the difference between major and minor version numbers tends to be mostly marketing these days, Unity is changing its major version to align with the calendar year. Expect future versions, starting with a beta in April, to be numbered 2017.x.

Unity 5.6 comes out of beta in March.

Source: Unity