It's all fun and games until something something AI.
Microsoft announced the Windows Machine Learning (WinML) API about two weeks ago, but they did so in a sort-of abstract context. This week, alongside the 2018 Game Developers Conference, they are grounding it in a practical application: video games!
Specifically, the API provides the mechanisms for game developers to run inference on the target machine. The trained models that it runs would be in the Open Neural Network Exchange (ONNX) format from Microsoft, Facebook, and Amazon. As the initial announcement suggested, it can be used for any application, not just games, but… you know. If you want to get a technology off the ground, and it requires a high-end GPU, then video game enthusiasts are good lead users. When run in a DirectX application, WinML kernels are queued on the DirectX 12 compute queue.
We’ve discussed the concept before. When you’re rendering a video game, simulating an accurate scenario isn’t your goal – the goal is to look like you are. The direct way of looking like you’re doing something is to do it. The problem is that some effects are too slow (or, sometimes, too complicated) to correctly simulate. In these cases, it might be viable to make a deep-learning AI hallucinate a convincing result, even though no actual simulation took place.
Fluid dynamics, global illumination, and up-scaling are three examples.
Previously mentioned SIGGRAPH demo of fluid simulation without fluid simulation...
... just a trained AI hallucinating a scene based on input parameters.
Another place where AI could be useful is… well… AI. One way of making game AI is to give it some set of data from the game environment, often including information that a player in its position would not be able to know, and have it run against a branching logic tree. Deep learning, on the other hand, can train itself on billions of examples of good and bad play, and produce results based on input parameters. While the two methods do not sound that different, moving from logic being designed to logic being assembled from an abstract good/bad dataset somewhat abstracts away the potential for assumptions and programmer error. Of course, it abstracts that potential for error into the training dataset, but that’s a whole other discussion.
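To make the contrast concrete, here is a minimal sketch (entirely illustrative, not any engine's actual API): a hand-authored branching logic tree encodes the designer's assumptions directly, while a "trained" policy derives its rule from labeled examples of play. A trivial nearest-neighbour lookup stands in for a real neural network here.

```python
# A hand-authored branching logic tree: every rule is a designer decision.
def scripted_ai(distance, health):
    if health < 25:
        return "flee"
    if distance < 5:
        return "attack"
    return "patrol"

# A "trained" policy instead assembles its logic from labeled examples of
# good play. Nearest-neighbour lookup is a toy stand-in for a neural network.
def train(examples):
    def policy(distance, health):
        nearest = min(examples,
                      key=lambda e: (e[0] - distance) ** 2 + (e[1] - health) ** 2)
        return nearest[2]
    return policy

policy = train([(2, 80, "attack"), (20, 90, "patrol"), (3, 10, "flee")])
```

Both functions map the same inputs to an action; the difference is whether a programmer wrote the branches or a dataset implied them.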
The third area that AI could be useful is when you’re creating the game itself.
There’s a lot of grunt and grind work when developing a video game. Licensing prefab solutions (or commissioning someone to do a one-off asset for you) helps ease this burden, but that gets expensive in terms of both time and money. If some of those assets could be created by giving parameters to a deep-learning AI, then those are assets that you would not need to make, allowing you to focus on other assets and how they all fit together.
These are three of the use cases that Microsoft is aiming WinML at.
Sure, these are smooth curves of large details, but the antialiasing pattern looks almost perfect.
For instance, Microsoft is pointing to an NVIDIA demo where they up-sample a photo of a car, once with bilinear filtering and once with a machine learning algorithm (although not WinML-based). The bilinear algorithm behaves exactly as someone who has used Photoshop would expect. The machine learning algorithm, however, was able to identify the objects that the image intended to represent, and it drew the edges that it thought made sense.
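The bilinear half of that comparison is easy to state precisely, which is why it behaves so predictably: each output sample is just a weighted average of the four nearest input pixels, so edges smear rather than sharpen. A minimal sketch (illustrative only, not NVIDIA's demo code):

```python
# Bilinear sampling in miniature: blend the four surrounding input pixels
# by their fractional distance. No new detail can appear, only averages.
def bilinear_sample(img, x, y):
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# A hard vertical edge (0 | 100) becomes a 50/50 smear halfway across.
img = [[0, 100],
       [0, 100]]
```

A machine-learning upscaler, by contrast, is free to output values that no weighted average of neighbours could produce, which is exactly how it redraws edges it "recognizes".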
Like their DirectX Raytracing (DXR) announcement, Microsoft plans to have PIX support WinML “on Day 1”. As for partners? They are currently working with Unity Technologies to provide WinML support in Unity’s ML-Agents plug-in. That’s all the game industry partners they have announced at the moment, though. It’ll be interesting to see who jumps in and who doesn’t over the next couple of years.
Subject: General Tech | December 6, 2017 - 12:58 PM | Jeremy Hellstrom
Tagged: amazon, google, Alexa, youtube
Google has decided that YouTube should not work as advertised on any Amazon devices, in retaliation for Amazon refusing to stream Amazon Prime Video over Google Cast or to sell Google devices online. Currently you will just be redirected to YouTube.com when you launch the app, but Google is planning on blocking all access from Echo Show or Fire TV devices in the near future. None of us particularly care about Google and Amazon's relationship problems but, sadly, similar to children whose parents are going through a divorce, we are the ones who suffer. These two companies have been at it for a while; The Register covers some of the highlights of their dysfunctional relationship here.
"Google is trying to stop Amazon Echo Show devices from streaming YouTube videos – and from January, it will block Amazon’s Fire TVs from accessing the vid service, too."
Here is some more Tech News from around the web:
- Qualcomm’s Disruptive Technologies: 5G Gigabit LTE And The Return Of ARM-Based Windows PCs @ Techgage
- Macronix comes up with new 3D NAND structure @ Fudzilla
- Yahoo and Mozilla sue each other over Firefox Quantum's switch back to Google @ The Inquirer
- Tesla’s Gigafactory might be behind a global battery shortage @ Engadget
- Get ready for laptop-tab-smartphone threesomes from Microsoft, Lenovo, HP, Asus, Qualcomm @ The Register
Subject: Graphics Cards | October 31, 2017 - 09:58 PM | Scott Michaud
Tagged: nvidia, amazon, google, pascal, Volta, gv100, tesla v100
Remember last month? Remember when I said that Google’s introduction of Tesla P100s would be good leverage over Amazon, as the latter is still back in the Kepler days (because Maxwell was 32-bit focused)?
To compare the two parts, the Tesla P100 has 3584 CUDA cores, yielding just under 10 TFLOPs of single-precision performance. The Tesla V100, with its ridiculous die size, pushes that up over 14 TFLOPs. As with Pascal, it supports full 1:2:4 FP64:FP32:FP16 performance scaling. It also has access to NVIDIA’s tensor cores, which are specialized for 16-bit, 4x4 multiply-add matrix operations that are common in neural networks, for both training and inferencing.
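Those TFLOPs figures fall out of a simple formula: CUDA cores times two operations per clock (a fused multiply-add counts as two) times the boost clock. A quick sketch; the clock values below are NVIDIA's published boost clocks for these parts, assumed here rather than taken from the article:

```python
# Peak FP32 = cores x 2 ops/clock (fused multiply-add) x boost clock (GHz).
def peak_fp32_tflops(cuda_cores, boost_clock_ghz):
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

p100 = peak_fp32_tflops(3584, 1.480)   # just under 10.6 TFLOPs
v100 = peak_fp32_tflops(5120, 1.455)   # roughly 14.9 TFLOPs

# With 1:2:4 FP64:FP32:FP16 scaling, FP64 is half and FP16 double these figures.
p100_fp64, p100_fp16 = p100 / 2, p100 * 2
```

The V100's extra throughput despite a slightly lower clock comes entirely from its larger core count, which is what that "ridiculous die size" buys.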
Amazon allows up to eight of them at once (with their P3.16xlarge instances).
So that’s cool. While Google has again been quickly leapfrogged by Amazon, it’s good to see NVIDIA getting wins in multiple cloud providers. This keeps money rolling in that will fund new chip designs for all the other segments.
Subject: General Tech | September 30, 2017 - 05:57 PM | Scott Michaud
Tagged: pc gaming, lumberyard, amazon
As we mentioned last week, Amazon has been pushing their Lumberyard fork of CryEngine in their own direction. It turns out that much of their future roadmap was actually slated for last Friday, with the release of Lumberyard 1.11.
This version replaces Crytek’s Flow Graph with Amazon’s Script Canvas visual scripting system. (Think Blueprints from Unreal Engine 4.) This lets developers design logic in a flowchart-like interface and attach it to relevant objects... building them up like blocks. Visual scripting is one area that Unity hasn’t (by default) gotten into, as they favour written scripting languages, such as C#. (Lumberyard also allows components to be written in C++ and Lua, btw.)
It also replaces Crytek’s CryAnimation, Geppetto, and Mannequin with the EMotion FX animation system from Mystic Game Development. Interestingly, this middleware has been flying under the radar in recent years. It was popular around the 2006-2009 timeframe with titles such as Gothic 3, Warhammer Online: Age of Reckoning, and Risen. It was also integrated into 2010’s The Lord of the Rings: Aragorn’s Quest and a few racing games, and that’s about it as far as we know. I’m curious to see how development has advanced over the last ten-or-so years, unless its use is more widespread than licensees are allowed to announce. Regardless, it is now Lumberyard 1.11’s primary animation system, so people can get their hands on it and see for themselves.
If you’re interested in developing a game in Amazon Lumberyard, this release has basically all of their forward-looking systems in place. Even though a lot of features are still experimental, and the engine is still in beta, I don’t think you have to worry at this point about being forced to develop in a system that will be deprecated.
Lumberyard is free to develop on, as long as you use Amazon Web Services for online services (or you run your own servers).
Subject: General Tech | September 23, 2017 - 12:41 PM | Scott Michaud
Tagged: pc gaming, amazon
Lumberyard has been out for a little over a year and a half, and it has been experiencing steady development since then. Just recently, they published a blog post highlighting where they want the game engine to go. Pretty much none of this information is new if you’ve been following them, but it’s still interesting nonetheless.
From a high level, Amazon has been progressing their fork of CryEngine into more of a component-entity system. The concept is similar to Unity, in that you place objects in the level, then add components to them to give them the data and logic that you require. Currently, these components are mostly done in Lua and C++, but Amazon is working on a visual scripting system, like Blueprints from Unreal Engine 4, called Script Canvas. They inherited Flow Graph from Crytek, which I think is technically still in there, but they’ve been telling people to stop using it for a while now. I mean, this blog post explicitly states that they don’t intend to support migrating from Flow Graph to Script Canvas, so it’s a “don’t use it unless you need to ship real soon” sort of thing.
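The component-entity idea itself is simple enough to sketch in a few lines. This is a toy illustration of the pattern, not Lumberyard's actual C++ API: an entity is just a named bag of components, and its behaviour comes from whatever you attach.

```python
# Toy component-entity sketch (illustrative; not Lumberyard's real API).
# An entity carries no logic of its own; components supply data and behaviour.
class Entity:
    def __init__(self, name):
        self.name = name
        self.components = {}

    def add(self, component):
        # Index by component type so systems can look entities up by capability.
        self.components[type(component).__name__] = component
        return self

class Transform:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

class Health:
    def __init__(self, hp=100):
        self.hp = hp

# Compose a level object out of parts instead of subclassing a deep hierarchy.
door = Entity("door").add(Transform(3.0, 1.0)).add(Health(50))
```

Contrast this with a strict subclass hierarchy (the Unreal Engine 4 C++ style mentioned below), where a "breakable door at a position" would need its own class.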
One of Lumberyard’s draws, however, is their license: free, but you can’t use this technology on any cloud hosting provider except AWS. So if you make an offline title, or you use your own servers, then you don’t need to pay Amazon a dime. That said, if you do something like leaderboards, persistent logins, or use cloud-hosted multiplayer, then you will need to do it through AWS, which, honestly, you were probably going to do anyway.
The current version is Lumberyard Beta 1.10. No release date has been set for 1.11, although they usually don’t say a word until it’s published.
Subject: General Tech | August 2, 2017 - 08:03 PM | Scott Michaud
Tagged: pc gaming, amazon
Amazon Web Services launched a new version of their Lumberyard game engine at SIGGRAPH. They advertise that the new version, Lumberyard Beta 1.10, is 50% original code from when they launched back in February 2016. The engine started as a fork of CryEngine, and I’ve watched it evolve rapidly since about November. They’re pushing the engine into sort-of an entity-component framework, similar to Unity, but with a focus on C++ and Lua. You create scripts that define some functionality, then place them on the relevant entities (versus making a hierarchy of strict subclasses like you would do in Unreal Engine 4’s C++ API).
Amazon’s visual scripting system, Script Canvas, was supposed to launch in 1.10, but I can’t see it mentioned, so I’m guessing it slipped.
So what does the version have? Mostly a bunch of new rendering features. Lumberyard 1.10 adds temporal anti-aliasing (TAA) and order-independent transparency. Because it is a deferred renderer, Lumberyard cannot practically use MSAA. The engine currently supports FXAA and SMAA, as well as supersampling of course, but 1.10 adds TAA, which blends parts of previous frames into the current one. Since the point of anti-aliasing is to account for all the geometry that makes up a pixel, not just what is on top and dead center, sub-pixel variation should eventually average out to a clean image.
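The core of that averaging can be sketched in one dimension. This is a simplified illustration of temporal accumulation, not Lumberyard's shader code: each frame, a small fraction of the new (sub-pixel-jittered) sample is blended into a running history, so a pixel half-covered by an edge converges toward 50% grey.

```python
# Temporal accumulation: blend a fraction (alpha) of each new jittered sample
# into the history buffer. Over many frames, sub-pixel variation averages out.
def taa_accumulate(samples, alpha=0.1):
    history = samples[0]
    for s in samples[1:]:
        history = (1 - alpha) * history + alpha * s
    return history

# A pixel half-covered by geometry: jittered samples alternate between
# "miss" (0) and "hit" (1). The history converges near the true coverage, 0.5.
samples = [0, 1] * 200
result = taa_accumulate(samples)
```

Real TAA also reprojects the history buffer along motion vectors and clamps it against neighbouring pixels to fight ghosting, but the averaging above is the part that earns it the "anti-aliasing" name.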
Order-independent transparency should be more interesting. I don’t think it’s currently available in Unreal Engine 4 or (stock) Unity 5, although I could be wrong on that, but it is noticeable in scenes with a lot of transparency. To drive the point home, NVIDIA Research made a demo in Lumberyard for GDC with glasses in a bar, embedded above. As the camera pans around the glasses, you can see that the multiple reflections in the top-left side of the upside-down glass are much more stable in the left image, and that the two reflections blend correctly where they meet in the center.
Lumberyard 1.10 also includes a lot of editor UI tweaks, which isn’t appealing to write about but... honestly... that’s what you want in a professional content creation tool update. Their entity component tools seem to be growing nicely from the screenshots I’ve seen.
Subject: General Tech | April 21, 2017 - 07:49 PM | Scott Michaud
Tagged: twitch, pc gaming, amazon
While Twitch had quite a large lead as a streaming service, there was a fairly sizable gap between its regular creators and its “Twitch Partners”. If you weren’t a Twitch Partner, you couldn’t directly monetize your stream, guarantee that your stream would be transcoded, and so forth.
That isn’t changing, but they are introducing an easier-to-obtain middle tier that will have some, but not all, of the Partner perks. “Twitch Affiliate” is this middle ground, and, while it is invite-only, it is open to pretty much anyone who intends to stream on a regular basis. Specifically, the threshold is about 500 minutes streamed in a month, spread over at least seven days, with an average of at least three concurrent viewers; you will also need at least 50 followers. If you stream a few times per week, this is not a very high bar, but it’s still not automatic.
I should note that Twitch will only consider the previous 30 days, rolling.
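Condensed into code, the criteria described above look something like this. This is a hypothetical check built only from the numbers in this post; Twitch's actual invite logic is their own:

```python
# Hypothetical Affiliate eligibility check, using the thresholds from the post.
# All figures are measured over the previous 30 days, rolling.
def affiliate_eligible(minutes_streamed, days_streamed,
                       avg_concurrent_viewers, followers):
    return (minutes_streamed >= 500
            and days_streamed >= 7
            and avg_concurrent_viewers >= 3
            and followers >= 50)
```

So a streamer doing, say, four 40-minute streams a week with a handful of regulars clears the bar; a single marathon stream in one weekend does not, because of the seven-day spread.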
The goal of this new tier is to provide some support for streamers, as they try to find their on-ramp to being a Twitch partner. At first, only the (relatively controversial) “Bits” system will be available for monetization, but other revenue streams, like video ads, should follow. Also, while you’re not guaranteed to receive video transcodes, Affiliates get priority access to whatever is left over from the Partners.
Personally, I’d like a guarantee that transcodes would be available, because I don’t want to occasionally alienate some viewers by sending Twitch too high of a bitrate during the stretches (even if it’s just 10% of the time) when lower-quality versions are unavailable. It still puts pressure on me to lower the quality that I send Twitch, which will often result in worse VOD quality. (I realize that you can use multiple encodes… and I currently do… but certain things, like frame rate, need to be consistent, at least with the current version of OBS Studio.)
Twitch should begin to contact eligible streamers soon, and will continue rolling in new users as they become eligible. Even then, it's not an immediate, automatic thing.
Subject: General Tech | February 14, 2017 - 01:39 PM | Jeremy Hellstrom
Tagged: amazon, chime, videoconferencing
If there is one thing we are short on, it is incompatible videoconferencing applications to use and support. Obviously this is why Amazon purchased Biba and has now leaped into the fray to provide Chime, a truly unique service which will transmit your voice and video over the internet in something called a conference. Sarcasm aside, Amazon Web Services has proven that it provides a solid set of services, which will be the backbone of the new app. Those who have struggled with Adobe's offering, or who have tried to have a meeting during one of the outage periods which plague various other providers, might want to take a look.
The basic service is free; Plus adds screen sharing and access to corporate directories for $2.50 per user a month; and the Pro version runs $15, allowing up to 100 people in a video call as well as the all-important personalized URL. Pop by Slashdot if you so desire.
"Amazon has released new service to make voice and video calls and share screen. Called Chime, the service is aimed at business users. It directly competes with well-known players such as Skype, Google Hangouts, GoToMeeting, Zoom, and Cisco's WebEx, among others."
Here is some more Tech News from around the web:
- Sugar helps make new sodium sulphur battery @ Nanotechweb
- Twitter rolls back anti-abuse tool after it's slammed for 'blinding the vulnerable' @ The Inquirer
- Microsoft Launches Outlook.com Premium Email Service, Costs $20 Per Year @ Slashdot
- Flash Nano: Germanane FET shows real promise for optoelectronics @ Nanotechweb
Subject: Displays | January 5, 2017 - 07:00 AM | Sebastian Peak
Tagged: Westinghouse, Ultra HD, UHD, tv, television, seiki, FireTV, Element, CES 2017, CES, amazon, Alexa, 4k
In a market packed with UHD TVs, a trio of budget television manufacturers have introduced new Amazon Fire TV-powered 4K televisions at CES, with new models announced from Seiki, Westinghouse, and Element. These TVs are "the world’s first 4K Ultra HD Smart TVs with Amazon Fire TV built in", with remotes supporting Alexa voice commands.
Quoting the press release, the new models from Seiki, Westinghouse, and Element will all offer the following features:
- Sizes: 43", 50", 55" and 65"
- 4K Ultra HD 3,840 by 2,160 panel resolution on all models
- The latest Amazon Fire TV user interface, including easy access to over-the-air TV programming (separate HD antenna required), simple TV input setup, and component switching
- Through the included voice remote with Alexa, customers can search for content and programming, control TV inputs and settings, and access Alexa skills to play music, get the news, check weather, sports scores, and more
- Voice remote with Alexa enabled control of smart home devices from multiple brands, including Belkin WeMo, Philips Hue, Wink, Insteon, Samsung SmartThings, Nest, TP-Link, Ecobee and more
- Access to more than 7,000 channels, games, apps and Alexa skills, including over 300,000 TV episodes and movies from Amazon Video, HBO NOW, Hulu and more
- Amazon Prime customers get unlimited access to Prime Video, featuring thousands of movies and TV episodes at no additional cost to their membership. Plus, with Amazon Channels, Prime members can now get HBO, SHOWTIME, STARZ, PBS KIDS, and over 100 more services. They only pay for the channels they want—no cable required, no additional apps to download, and easy online cancellation.
- 3 GB memory and 16 GB internal storage
- Bluetooth, Wi-Fi, and Ethernet connectivity
- Streaming resolution at 4K Ultra HD (2160p), 1080p, 720p up to 60 fps
- One-year limited warranty and great customer support
We have seen a similar idea with Roku TVs from Hisense, TCL, and others, as budget TV makers look to differentiate themselves, and the integration of the popular Amazon Fire TV OS may help position Seiki and company more favorably. Hopefully, improvements in backlighting tech and reductions in UHD panel production costs will have a "trickle-down" effect on picture quality for TVs selected on cost alone; for now, improved user interface design can go a long way toward making these budget TVs pleasant to use.
Follow all of our coverage of the show at https://pcper.com/ces!
Subject: Storage | December 8, 2016 - 05:59 PM | Tim Verry
Tagged: Seagate, external hard drive, cloud storage, cloud backup, amazon drive, amazon
Seagate and Amazon have partnered up to offer a new USB external hard drive called the Seagate Duet that, while functioning as you would expect an external drive to, also automatically keeps files synced between itself and the user's Amazon Drive cloud storage. The Duet is based on Seagate's Backup Plus drive series and is a 1TB drive with two platters and PMR (perpendicular magnetic recording) technology that spins at 5400 RPM. It connects to PCs over USB 3.0.
During the initial setup, users provide their Amazon Drive login to the Duet software, which will upload all media files stored on the external drive to Amazon Drive, as well as download any files stored on Amazon Drive, regardless of whether they were uploaded by the Duet or by other devices not using the Duet software.
Seagate offers a two-year warranty on the drive, which will be an Amazon.com exclusive available on December 10th for $99.99. The Duet does come at quite the premium over other drives (even Seagate's own): 1TB USB 3.0 drives without automatic cloud syncing come in at around $50, and 2TB drives can easily be found for less than the Duet's $100 price.
However, there is a bit of a saving grace in that the Seagate Duet does come with one year of free Amazon Drive Unlimited storage which normally costs $59.99 a year.
For enthusiasts, there are cheaper 1TB or higher capacity drives for the same price as the Duet, but I find myself thinking that this would be a great gift for family members to help them protect their precious family photos and videos from a drive failure or lost drive! With the holidays coming up fast, if you have not figured out the perfect gift yet this may just be the thing to buy – and if something does happen, the real gift is that their photos are safely backed up!