Subject: General Tech | December 3, 2018 - 08:00 AM | Sebastian Peak
Tagged: ue4, nvidia, NeurIPS, deep learning, ai, 3D rendering
NVIDIA has introduced new research at the NeurIPS AI conference in Montreal that allows rendering of 3D environments from models trained on real-world videos. It's a complex topic with potential beyond scientific research, including possible applications for game developers, though it has not reached the "product" stage just yet. A video accompanying today's press release shows how the researchers have implemented the technology so far:
"Company researchers used a neural network to apply visual elements from existing videos to new 3D environments. Currently, every object in a virtual world needs to be modeled. The NVIDIA research uses models trained from video to render buildings, trees, vehicles and objects."
The AI-generated city in a simple driving game demo shown at the NeurIPS AI conference gives us an early look at the sort of 3D environment the neural network can render. As NVIDIA describes it, "the generative neural network learned to model the appearance of the world, including lighting, materials and their dynamics" from video footage, and the result was rendered as the game environment using Unreal Engine 4.
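For a concrete sense of the pipeline, here is a minimal PyTorch sketch of the general idea: a generator network that turns a per-frame semantic label map (which a game engine like UE4 can output) into a photorealistic RGB frame. The architecture, layer sizes, and class count below are illustrative assumptions, not NVIDIA's actual model:

```python
# Minimal sketch (PyTorch), assuming a vid2vid-style setup: the engine
# outputs a semantic segmentation map per frame, and a generator network
# hallucinates a photorealistic frame from it. The network shape and
# names here are illustrative, not NVIDIA's published architecture.
import torch
import torch.nn as nn

class SegToImageGenerator(nn.Module):
    """Maps a one-hot semantic label map to an RGB frame."""
    def __init__(self, num_classes: int = 20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 7, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),   # downsample
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),            # process
            nn.Upsample(scale_factor=2, mode="nearest"),              # upsample
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh(),                # RGB in [-1, 1]
        )

    def forward(self, label_map: torch.Tensor) -> torch.Tensor:
        return self.net(label_map)

# One-hot label map for a single 256x256 frame (batch of 1, 20 classes).
labels = torch.zeros(1, 20, 256, 256)
labels[:, 3] = 1.0                      # e.g. every pixel tagged "building"
frame = SegToImageGenerator()(labels)   # -> (1, 3, 256, 256) generated frame
```

In NVIDIA's demo the engine supplies only the rough sketch of the scene, and the trained network paints in the appearance it learned from real video.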
"The technology offers the potential to quickly create virtual worlds for gaming, automotive, architecture, robotics or virtual reality. The network can, for example, generate interactive scenes based on real-world locations or show consumers dancing like their favorite pop stars."
Beyond video-to-video, this research can also be applied to still images, with models providing the basis for eventually rendered movement (the video embedded above includes a demonstration of this aspect of the research - and yes, dancing is involved). And while all of this might be a year or two away from appearing in a new game release, the possibilities are fascinating to contemplate, to say the least.
Subject: General Tech | October 4, 2017 - 09:19 PM | Scott Michaud
Tagged: render token, ethereum, 3D rendering
You know how people have been buying up GPUs to mine coin? A new company, Render Token, has just announced a service that works in a similar way, except that the output is rendered images. A better analogy would be something like Folding@Home, except that the user is paid for the work their computer performs. The CEO and president, Jules Urbach and Alissa Grainger respectively, are co-founders of OTOY, which does GPU- and cloud-accelerated rendering.
According to Jules Urbach at Unite Austin, they are deliberately paying more than Ethereum mining would yield for the same amount of processing power.
I am... torn on this issue. On the one hand, it’s a cool application of crowd-sourced work, and it helps utilize idle silicon scattered around the globe. On the other hand, I hope that this won’t kick GPU supply levels while they’re down. Sure, at least there’s some intrinsic value to the workload, but I can just see people sticking racks of caseless systems in their basement, while gamers keep browsing Amazon for something under four digits (excluding the cents) to appear in stock.
What do you all think? Does the workload usefulness dull the pain?
Subject: General Tech | October 4, 2017 - 08:59 PM | Scott Michaud
Tagged: 3D rendering, otoy, Unity, deep learning
When raytracing images, the sample count has a massive impact on both quality and rendering performance. It corresponds to the number of rays cast within a pixel, which, when averaged out over many, many rays, eventually converges on what the pixel should be. Think of it this way: if your first ray bounces directly into a bright light, and the second ray bounces into the vacuum of space, should the color be white? Black? Half-grey? Who knows! However, if you send 1,000 rays in some randomized pattern, then the average is probably a lot closer to what it should be (which depends on how big the light is, what it bounces off of, and so on).
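To make that concrete, here is a toy Monte Carlo sketch in Python (my own illustration, not OTOY's code) showing how the pixel estimate settles down as the ray count grows. The 30% light coverage is an arbitrary assumption:

```python
# Toy Monte Carlo pixel estimate: each "ray" randomly hits a bright
# light (1.0) or empty space (0.0), and the pixel brightness is the
# average over N samples. The light is assumed to cover 30% of the
# directions a ray can bounce into.
import random

def sample_pixel(num_rays: int, light_coverage: float = 0.3) -> float:
    """Average many random ray outcomes into one pixel brightness."""
    hits = sum(1 for _ in range(num_rays) if random.random() < light_coverage)
    return hits / num_rays

random.seed(42)
for n in (1, 10, 1000, 100000):
    print(f"{n:>6} rays -> brightness {sample_pixel(n):.3f}")
# With 1 ray the pixel is pure black or white; as n grows the estimate
# converges toward the true value of 0.3 -- which is why cutting the
# sample count (and denoising the leftover noise) saves so much time.
```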
At Unite Austin, which started today, OTOY showed off an “AI temporal denoiser” algorithm for raytraced footage. Typically, an artist chooses a sample rate that looks good enough to the end viewer. In this case, the artist only needs to choose enough samples that an AI can create a good-enough video for the end user. While I’m curious how much performance is required in the inferencing stage, I do know how much a drop in sample rate can affect render times, and it’s a lot.
Check out OTOY’s video, embedded above.
Subject: General Tech, Graphics Cards | July 25, 2016 - 09:48 PM | Sebastian Peak
Tagged: siggraph 2016, Siggraph, capsaicin, amd, 3D rendering
At its Capsaicin SIGGRAPH event tonight, AMD announced that the rendering engine previously previewed as FireRender is officially launching as AMD Radeon ProRender, and that it is becoming open source as part of AMD's GPUOpen initiative.
From AMD's press release:
AMD today announced its powerful physically-based rendering engine is becoming open source, giving developers access to the source code.
As part of GPUOpen, Radeon ProRender (formerly previewed as AMD FireRender) enables creators to bring ideas to life through high-performance applications and workflows enhanced by photorealistic rendering.
GPUOpen is an AMD initiative designed to assist developers in creating ground-breaking games, professional graphics applications and GPU computing applications with much greater performance and lifelike experiences, at no cost and using open development tools and software.
Unlike other renderers, Radeon ProRender can simultaneously use and balance the compute capabilities of multiple GPUs and CPUs – on the same system, at the same time – and deliver state-of-the-art GPU acceleration to produce rapid, accurate results.
Radeon ProRender plugins are available today for many popular 3D content creation applications, including Autodesk® 3ds Max®, SOLIDWORKS by Dassault Systèmes and Rhino®, with Autodesk® Maya® coming soon. Radeon ProRender works across Windows®, OS X and Linux®, and supports AMD GPUs, CPUs and APUs as well as those of other vendors.
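The multi-device claim in the press release boils down to work distribution. As a rough illustration only (this is not ProRender's actual scheduler, and the device names and speeds are invented), a renderer can split the frame into tiles and hand each device a share proportional to its throughput:

```python
# Toy sketch of balancing one frame across heterogeneous devices:
# assign tiles proportionally to each device's relative speed so the
# GPUs and the CPU all finish at roughly the same wall-clock time.
devices = {"GPU0": 9.0, "GPU1": 8.5, "CPU": 2.5}  # relative speed (assumed)
total_tiles = 240                                  # frame split into 240 tiles
total_speed = sum(devices.values())

for name, speed in devices.items():
    share = round(total_tiles * speed / total_speed)
    print(f"{name}: {share} tiles")
# GPU0: 108 tiles, GPU1: 102 tiles, CPU: 30 tiles -- every device stays
# busy instead of the CPU sitting idle while the GPUs render.
```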
Subject: General Tech | July 25, 2016 - 04:47 PM | Scott Michaud
Tagged: nvidia, mental ray, maya, 3D rendering
NVIDIA purchased Mental Images, the German software developer that makes the mental ray renderer, all the way back in 2007. It has been bundled with every copy of Maya for a very long time now. In fact, my license of Maya 8, which I purchased back in, like, 2006, came with mental ray in both plug-in and stand-alone forms.
Interestingly, even though nearly a decade has passed since NVIDIA's acquisition, Autodesk has been the middle-person that end users dealt with. That will end soon: NVIDIA announced at SIGGRAPH that it will “be serving end users directly” with its mental ray for Maya plug-in. The new plug-in shows results directly in the viewport, starting at low quality and refining until the view changes. NVIDIA is obviously not the first company to do this - Cycles in Blender is a good example - but I would expect it to be a welcome feature for users.
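The viewport behavior NVIDIA describes is classic progressive refinement. Here is a minimal sketch of the idea (my own illustration, not the mental ray API): keep accumulating one-sample passes into a running average, and restart whenever the camera moves.

```python
# Progressive viewport rendering, boiled down to one pixel: each pass
# adds one more noisy sample, the running average is what gets shown,
# and a camera move throws the accumulation away and starts over.
import random

def render_pass(seed: int) -> float:
    """Stand-in for one noisy 1-sample render of a single pixel."""
    random.seed(seed)
    return random.random()

def progressive_render(camera_moved, display_frames: int = 4) -> None:
    accumulated, passes = 0.0, 0
    for _ in range(display_frames):          # one iteration per displayed frame
        if camera_moved():
            accumulated, passes = 0.0, 0     # view changed: restart refinement
        accumulated += render_pass(passes)   # add one more sample
        passes += 1
        print(f"pass {passes:2d}: showing {accumulated / passes:.3f}")

progressive_render(camera_moved=lambda: False)   # static view: image sharpens
```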
Benchmark results provided by NVIDIA
At the same time, they are also announcing GI-Next. This will speed up global illumination in mental ray, and it will also reduce the number of options required to tune the results to just a single quality slider, making it easier for artists to pick up. One of their benchmarks shows a 26-fold increase in performance, although most of that can be attributed to GPU acceleration from a pair of GM200 Quadro cards. CPU-only tests of the same scene show a 4x increase, though, which is still pretty good.
The new version of mental ray for Maya is expected to ship in September, although it has been in open beta (for existing Maya users) since February. NVIDIA does say that “pricing and policies will be announced closer to availability,” though, so we'll need to wait and see how different the licensing structure will be. Currently, Maya ships with a few licenses of mental ray out of the box, and has for quite some time.
Subject: General Tech | September 12, 2014 - 04:41 PM | Jeremy Hellstrom
Tagged: Realsense 3D, idf 2014, 3D rendering, 3d printing
There was an interesting use of Intel's RealSense 3D technology displayed at IDF by a company called Volumental. Using a new product built around the new style of camera, it would be possible to make a 3D map of your body accurate enough to make clothing patterns from. The example offered was a pair of shoes that could be ordered online with no concerns about fit, as the shoes would be made to your measurements. That is just the beginning, though: you would also be able to order a perfectly tailored suit online without ever needing to appear in person for a fitting. It could also lead to an even worse Fappening in the future; choose your online clothing supplier carefully. There is more at The Inquirer.
"The proof of concept software, called Volume Voice[sic], accurately scans parts of the human body with Intel's Realsense 3D depth cameras, which will soon feature on Intel-powered laptops and tablets. Volumental's cloud-based platform will then allow individuals to create products that are tailored to their own bodies, for example, shoes that fit perfectly without the need to try them on before buying."
Here is some more Tech News from around the web:
- TSMC forms IoT task force @ DigiTimes
- Intel launches digital signage turnkey solutions @ DigiTimes
- No TKO for LTO: Tape format spawns another 2 generations, sports 120TB bigness @ The Register
- Leak of '5 MEELLLION Gmail passwords' creates security flap @ The Register
- iPhone 6 Plus first impressions @ The Inquirer
- TP-Link AV500 Powerline @ HardwareHeaven
Subject: General Tech | April 2, 2013 - 02:54 AM | Tim Verry
Tagged: next generation character rendering, GDC 13, gaming, Activision, 3D rendering
Activision recently showed off its Next-Generation Character Rendering technology, a new method for rendering realistic and high-quality 3D faces. The technology has been in the works for some time, and it is now at a point where faces are extremely detailed, down to pores, freckles, wrinkles, and eyelashes.
In addition to Lauren, Activision also showed off its own take on the face used in NVIDIA's Ira FaceWorks tech demo, except that instead of NVIDIA's rendering, the face was produced with Activision's own Next-Generation Character Rendering technology - a method that is allegedly more efficient and "completely different" from the one used for Ira. In a video showing off the technology (embedded below), the Activision method produces some impressive 3D renders in real time, but the faces appear a bit creepy and unnatural when talking. Perhaps Activision and NVIDIA should find a way to combine the emotional improvements of Ira with the graphical prowess of NGCR (and while we are making a wish list, I might as well add TressFX support... heh).
The high resolution faces are not quite ready for the next Call of Duty, but the research team has managed to get models to render at 180 FPS on a PC with a single GTX 680 graphics card. That is not enough to implement the technology in a game, where multiple models, the environment, physics, AI, and all manner of other calculations must be handled at acceptable frame rates, but it is nice to see this kind of forward-looking work being done now. Perhaps in a few graphics card generations the hardware will catch up to the face rendering technology that Activision (and others) are working on, which will be rather satisfying to see. It is amazing how far the graphics world has come since I got into PC gaming with Wolfenstein 3D, to say the least!
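To put that 180 FPS figure in perspective, here is the back-of-the-envelope math (my own, not Activision's):

```python
# Frame budget math for the 180 FPS figure quoted above.
fps_demo = 180.0                 # one face, nothing else, on a GTX 680
ms_per_face = 1000.0 / fps_demo  # ~5.6 ms just to draw the face
budget_60fps = 1000.0 / 60.0     # ~16.7 ms total per frame in a 60 FPS game
print(f"face cost: {ms_per_face:.1f} ms of a {budget_60fps:.1f} ms budget "
      f"({ms_per_face / budget_60fps:.0%})")
# One talking head would eat roughly a third of the whole frame budget,
# leaving the rest for the world, physics, AI, and every other character.
```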
The team behind Activision's Next-Generation Character Rendering technology includes:
- Javier Von Der Pahlen: Director of Research and Development
- Etienne Donvoye: Technical Director
- Bernardo Antoniazzi: Technical Art Director
- Zbyněk Kysela: Modeler and Texture Artist
- Mike Eheler: Programming and Support
- Jorge Jimenez: Real-Time Graphics Research and Development
Jorge Jimenez has posted several more screenshots of the GDC tech demo on his blog that are worth checking out if you are interested in the new rendering tech.