Build 2014: .NET Foundation Announced with Open Source

Subject: General Tech, Shows and Expos | April 4, 2014 - 03:42 AM |
Tagged: BUILD 2014, microsoft, .net

Microsoft has announced the creation of the .NET Foundation along with the open source release of several .NET frameworks and languages. This comes a day after the simultaneous unveiling and open sourcing of WinJS, a JavaScript library which brings "Modern"-like interface elements to websites (and web apps). While building-block APIs are common, this one could help Microsoft's design paradigms gain traction with apps on other platforms.

microsoft-dotnet-foundation.png

.NET has been very popular since its initial release. I saw it used frequently in applications, particularly those that only needed a simple, form-like interface. It was easy to develop for and accessible from several languages, such as C++, C#, and VB.NET. Enterprise application developers were particularly interested in it, especially for the security benefits of managed code.

The framework also inspired an open source reimplementation, Mono, long sponsored by Novell. Some time later, Xamarin was founded by members of the original Mono development team and maintains the project to this day. In fact, Miguel de Icaza was at Build 2014 discussing the initiative. He seems content with Microsoft's new Roslyn compiler and with the working relationship between the two companies as a whole.

WinJS is released under the very permissive Apache 2.0 license. Other projects, such as the Windows Phone Toolkit, are released under different licenses, such as the Microsoft Public License (Ms-PL). Pay attention to any given project's license; it would not be wise to assume. Still, this sounds like a good step.

Source: ZDNet

Build 2014: Microsoft Presents New Start Menu

Subject: General Tech, Shows and Expos | April 2, 2014 - 09:53 PM |
Tagged: BUILD 2014, microsoft, windows, start menu

Microsoft had numerous announcements during their Build 2014 opening keynote, which makes sense, as they needed to fill the three hours allotted for it. In this post, I will focus on the upcoming changes to the Windows desktop experience. Two related features were highlighted: the ability to run Modern Apps in a desktop window, and the corresponding return of the Start Menu.

I must say, the way that they grafted Start Screen tiles onto the Start Menu is pretty slick. Since Windows Vista, the Start Menu has felt awkward, split between recently used applications on the left, common shortcuts broken out on the right, and an expanding "All Programs" submenu handle on the bottom. It is functional and works perfectly fine, but something about it has always felt a little off. This looks a lot cleaner, in my opinion, especially since its width varies according to how many applications are pinned.

Of course, my major complaint with Windows 8.x has nothing to do with the interface. There has not been any discussion around sideloading applications to get around Windows Store certification requirements. This is a major concern for browser vendors, and it should be one for many others: from hobbyists who might want to share their creations with one or two friends or family members, rather than an entire Windows Store region, to citizens of countries whose governments might pressure Microsoft to ban encryption or security applications.

That said, there is a session tomorrow called "Deploying and Managing Enterprise Apps", discussing changes to app sideloading in Windows 8.1. Enterprise users can already obtain sideloading certificates from Microsoft. Maybe it will be expanded? I am not holding my breath.

Keep an eye out, because there should be a lot of news over the next couple of days.

Source: ZDNet

GDC 2014: Shader-limited Optimization for AMD's GCN

Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM |
Tagged: gdc 14, GDC, GCN, amd

While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by their compute cores, sometimes thousands of them. Video drivers are complex packages of software, and one of their many tasks is converting your shader programs into machine code for the hardware. If that machine code is efficient, it can mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.

amd-gcn-unit.jpg

Emil Persson of Avalanche Studios, probably best known for the Just Cause franchise, published the slides and notes from his talk on optimizing shaders. The talk focuses on AMD's GCN architecture, due to its presence in both consoles and PCs, while bringing up older GPUs for comparison. Yes, it contains many snippets of GPU assembly code.

AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of that memory) rather than to "slots" (although drivers and APIs often pretend those relics still exist), vectors are mostly treated as collections of scalars, and so forth. Tricks that attempt to pack instructions together into vector operations, such as using dot products, can simply place irrelevant restrictions on the compiler and optimizer... because the hardware breaks those vector operations back down into the very same component-by-component ops that you thought you were avoiding.
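To illustrate that last point, here is a conceptual sketch in plain C++ (not shader code and certainly not GCN assembly): a four-component dot product is, to scalar-per-lane hardware, just the same per-component multiplies and adds you would have written yourself, so packing math into vectors saves nothing and only constrains the optimizer.

```cpp
// Conceptual illustration only: what a "vector" dot product amounts to on an
// architecture that treats vectors as collections of scalars. The float4 type
// and dot() helper here are stand-ins, not real shader or driver code.
struct float4 { float x, y, z, w; };

float dot(const float4& a, const float4& b)
{
    // The hardware ultimately executes four independent scalar operations,
    // typically one multiply followed by three multiply-adds.
    float r = a.x * b.x;
    r += a.y * b.y;   // effectively a multiply-add
    r += a.z * b.z;
    r += a.w * b.w;
    return r;
}
```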

Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes through the execution speed of individual ops compared to one another, at various precisions, and which ones to avoid (protip: integer divide). Also, fused multiply-add is awesome.
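For a flavor of those micro-level points, here are two of them as a plain C++ sketch (illustrative only, not shader code): sidestepping an integer divide when the divisor is a known power of two, and leaning on fused multiply-add, which computes a*b+c in one operation with a single rounding.

```cpp
// Illustrative sketch of two of the talk's themes; the function names and the
// divide-by-16 example are hypothetical.
#include <cmath>
#include <cstdint>

std::uint32_t tile_index(std::uint32_t pixel_x)
{
    // return pixel_x / 16;   // integer divide: one of the slowest ops on a GPU
    return pixel_x >> 4;      // same result for unsigned power-of-two divisors
}

float lerp(float a, float b, float t)
{
    // std::fma maps naturally onto the hardware's fused multiply-add:
    // a + t * (b - a), with the multiply and add rounded once.
    return std::fma(t, b - a, a);
}
```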

I know I learned.

As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and, in terms of Intel vs. AMD, it might help). But honestly, I believe that it will be trends like this presentation which prove more significant... even if they stay behind the scenes. Of course, developers have always had these discussions, but now console developers will mostly be talking about a single architecture - that is a lot of people talking about very few things.

This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.

GDC 14: NVIDIA, AMD, and Intel Discuss OpenGL Speed-ups

Subject: General Tech, Shows and Expos | March 22, 2014 - 01:41 AM |
Tagged: opengl, nvidia, Intel, gdc 14, GDC, amd

So, for all the discussion about DirectX 12, the three main desktop GPU vendors, NVIDIA, AMD, and Intel, want to tell OpenGL developers how to tune their applications. Using OpenGL 4.2 and a few cross-vendor extensions (because OpenGL is all about its extensions), a handful of known tricks can reduce driver overhead by up to ten-fold and increase performance by up to fifteen-fold. The talk is very graphics-developer-centric, but it basically describes a series of tricks known to accomplish feats similar to what Mantle and DirectX 12 promise.

opengl_logo.jpg

The 130-slide presentation is broken into a few sections, with each GPU vendor getting a decent chunk of time. On occasion, they mention which implementation fares better with a given function call. The main point that they wanted to drive home (they clearly repeated the slide three times with three different fonts) is that none of this requires a new API. Everything exists and can be implemented right now. The real trick is knowing how not to poke the graphics library the wrong way.
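As one concrete example of the kind of trick involved, here is a minimal sketch of persistent-mapped buffers, assuming an already-created OpenGL context with the cross-vendor ARB_buffer_storage extension (or GL 4.4) and a loader such as GLEW; the buffer size and function names are illustrative. The idea is to map a buffer once and stream data into it with a plain memcpy, instead of paying for a map/unmap round-trip through the driver every frame.

```cpp
#include <GL/glew.h>
#include <cstring>

GLuint g_buffer    = 0;
void*  g_mappedPtr = nullptr;
const GLsizeiptr kBufferSize = 4 * 1024 * 1024;  // 4 MB of streaming space

void CreatePersistentBuffer()
{
    const GLbitfield flags = GL_MAP_WRITE_BIT |
                             GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;

    glGenBuffers(1, &g_buffer);
    glBindBuffer(GL_ARRAY_BUFFER, g_buffer);

    // Immutable storage: the driver places it once and never revalidates it.
    glBufferStorage(GL_ARRAY_BUFFER, kBufferSize, nullptr, flags);

    // Map once at startup; the pointer stays valid while the buffer is in use.
    g_mappedPtr = glMapBufferRange(GL_ARRAY_BUFFER, 0, kBufferSize, flags);
}

void StreamVertices(const void* vertices, std::size_t bytes, std::size_t offset)
{
    // Just a memcpy at draw time: no glBufferSubData, no map/unmap churn.
    // Real code would also fence (glFenceSync) to avoid stomping in-flight data.
    std::memcpy(static_cast<char*>(g_mappedPtr) + offset, vertices, bytes);
}
```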

The page also hosts a keynote from the recent Steam Dev Days.

That said, an advantage that I expect from DirectX 12 and Mantle is reduced driver complexity. Since the processors have settled into standards, I expect that drivers will not need to do as much unless the library demands it for legacy reasons. I am not sure how extending OpenGL will affect that benefit, as opposed to isolating the legacy and building on a clean foundation, but I wonder whether these extensions could be just as easy to maintain and optimize. Maybe they are.

Either way, the performance figures do not lie.

Source: NVIDIA

Oculus Rift Development Kit 2 (DK2) Is $350, Expected July

Subject: General Tech, Displays, Shows and Expos | March 22, 2014 - 01:04 AM |
Tagged: oculus rift, Oculus, gdc 14, GDC

Last month, we published a news piece stating that Oculus Rift production had been suspended because "certain components" were unavailable. At the time, the company said it was looking for alternate suppliers but did not know how long that would take. The speculation was that the company was simply readying a new version and did not want to cannibalize its sales.

This week, they announced a new version which is available for pre-order and expected to ship in July.

DK2, as it is called, integrates a pair of 960x1080 OLED displays (correction, March 22nd @ 3:15pm: it is technically a single 1080p panel that is divided per eye) for higher resolution and lower persistence. Citing Valve's VR research, they claim that low persistence will reduce the motion blur caused by your eye blending neighboring frames together. In this design, the display flashes the image for a short period before going black, and it does this at a high enough rate to keep your eye fed with light. The higher resolution also reduces the "screen door effect" that users complained about with the first release. Like the "Crystal Cove" prototype, it also uses an external camera to reduce latency in detecting your movement. All of this should combine to cause less motion sickness.

I would expect that VR has a long road ahead of it before it becomes a commercial product for the general population, though. There are many legitimate concerns about leaving your users trapped in a sensory deprivation apparatus, when Kinect could not even go a couple of days without someone pretending to play volleyball and wrecking their TV with ceiling fan fragments. Still, this company seems to be doing it intelligently: stay afloat on developers and lead users as you work through your prototypes. The hardware is cool, even if it will get significantly better, and people will support the research while getting the best available at the time.

DK2 is available for pre-order for $350 and is expected to ship in July.

Source: Oculus

GDC 14: Unreal Engine 4 Launches with Radical Changes

Subject: General Tech, Shows and Expos | March 19, 2014 - 08:15 PM |
Tagged: unreal engine 4, gdc 14, GDC, epic games

Game developers, from indie to gigantic, can now access Unreal Engine 4 with a $19/month subscription (plus 5% of revenue from resulting sales). This is a much different model from UDK, which was free for developing games with its precompiled builds until commercial release, at which point an upfront fee and a 25% royalty applied. For Unreal Engine 4, however, this $19 monthly fee also gives you full C++ source code access (which I have wondered about since the announcement that UnrealScript no longer exists).

Of course, the Unreal Engine 3-based UDK is still available (and just recently updated).

This is definitely interesting and, I believe, a response to publishers doubling down on developing their own engines. EA has basically sworn off engines outside of its own Frostbite and Ignite technologies. Ubisoft has only announced or released three games based on Unreal Engine since 2011; Activision has announced or released seven in that time, three of which were in that first year. Epic Games has always been very friendly to smaller developers and, with the rise of the internet, it is becoming much easier for indie developers to release content through Steam or even their own websites. These developers now have a "AAA" engine, which I think almost anyone would agree Unreal Engine 4 is, with an affordable license (and full source access).

Speaking of full source access, licensees can access the engine on Epic's GitHub. While a top-five publisher might hesitate to share fixes and patches, the army of smaller developers might share and share alike. This could lead to Unreal Engine 4 acquiring its own features rapidly. Epic highlights their Oculus VR, Linux and SteamOS, and native HTML5 initiatives but, given community support, there could be pushes into unofficial support for Mantle, TrueAudio, or other technologies. Who knows?

A sister announcement, albeit a much smaller one, is that Unreal Engine 4 is now part of NVIDIA's GameWorks initiative. This integrates various NVIDIA SDKs, such as PhysX, into the engine. The press release quote from Tim Sweeney is as follows:

Epic developed Unreal Engine 4 on NVIDIA hardware, and it looks and runs best on GeForce.

Another brief mention is that Unreal Engine 4 will have expanded support for Android.

So, if you are a game developer, check out the official Epic Games blog post at their website. You can also check their YouTube page for various videos, many of which were released today.

Source: Epic Games

GDC 14: CRYENGINE To Support Mantle, AMD Gets Another Endorsement

Subject: General Tech, Shows and Expos | March 19, 2014 - 05:00 PM |
Tagged: Mantle, gdc 14, GDC, crytek, CRYENGINE

While I do not have many other details, Crytek and AMD have announced that mainline CRYENGINE will support the Mantle graphics API. CRYENGINE, by Crytek, now sits alongside Frostbite, by DICE, and Nitrous, by Oxide Games, as engines which support that alternative to DirectX and OpenGL. This comes little more than a week after Crytek's announcement of native Linux support for their popular engine.

Crysis2.jpg

The tape has separate draw calls!

Crytek has been "evaluating" the API for quite some time now, showing interest back at the AMD Developer Summit. Since then, they have apparently made a clear decision on it. It is also not the first time that CRYENGINE has been publicly introduced to Mantle: Chris Roberts' Star Citizen, also powered by the fourth-generation CRYENGINE, has already announced support for the graphics API. Of course, there is a large gap between having a licensee do the legwork to include an API and having the engine developer provide supported builds (that would be like saying Unreal Engine 3 supports the original Wii).

Hopefully we will learn more as GDC continues.

Editor's (Ryan) Take:

As the week at GDC has gone on, AMD continues to push forward with Mantle and calls Crytek's implementation of the low-level API "a huge endorsement" of the company's direction and vision for the future. Many, including myself, have considered that the pending announcement of DX12 would be a major setback for Mantle, but AMD claims that view is "short sighted" and that, as more developers come into the Mantle ecosystem, it is proof AMD is doing the "right thing."

Here at GDC, AMD told us they have expanded the number of beta Mantle members dramatically, with plenty more applications (dozens) waiting. Obviously, this could put a lot of strain on AMD for Mantle support and maintenance, but representatives assure us that the major work of building out documentation and development tools is nearly 100% behind them.

mantle.jpg

If stories like this one over at SemiAccurate are true, and Microsoft's DirectX 12 really will be nearly identical to AMD's Mantle, then it makes sense that developers serious about new gaming engines can get a leg up on projects by learning Mantle today. Applying that knowledge to the DX12 API upon its release could speed up development and improve implementation efficiency. From what I am hearing from the few developers willing to even mention DX12, Mantle is much further along in its release (late beta) than DX12 is (early alpha).

AMD was indeed talking with Microsoft and sharing Mantle's development "every step of the way," and AMD has stated on several occasions that there were two possible outcomes for Mantle: it either becomes, or inspires, a new industry standard in game development. Even if DX12 is more or less a carbon copy of Mantle, forcing NVIDIA to implement that API style with DX12's release, AMD could potentially have an advantage in gaming performance and support between now and Microsoft's DirectX release. That could be as much as a full calendar year, from reports we are getting at GDC.

Ray Tracing is back? That's Wizard!

Subject: General Tech, Shows and Expos | March 19, 2014 - 01:20 PM |
Tagged: Imagination Technologies, gdc 14, wizard, ray tracing

The Tech Report visited Imagination Technologies' booth at GDC, where they were showing off a new processor, the Wizard GPU. It is based on the PowerVR Series6XT Rogue graphics processor, augmented with hardware specifically designed to accelerate ray tracing, a topic we haven't heard much about lately. They describe it as capable of processing 300 million rays and 100 million dynamic triangles per second, which translates to 7 to 10 rays per pixel at 720p and 30Hz, or 3 to 5 rays per pixel at 1080p and 30Hz. That is not bad, though Imagination Technologies estimates that movies are rendered at 16 to 32 rays per pixel, so it may be a while before we see a Ray Tracing slider under Advanced Graphics Options.
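For reference, those rays-per-pixel figures fall straight out of dividing the claimed ray throughput by the pixel throughput; a quick back-of-the-envelope check (my arithmetic, using the article's numbers):

```cpp
#include <cstdio>

int main()
{
    const double rays_per_second = 300e6;                      // claimed ray throughput
    const double px_720p  = 1280.0 * 720.0  * 30.0;            // ~27.6M pixels/s at 30Hz
    const double px_1080p = 1920.0 * 1080.0 * 30.0;            // ~62.2M pixels/s at 30Hz

    std::printf("720p:  %.1f rays per pixel\n", rays_per_second / px_720p);   // ~10.9
    std::printf("1080p: %.1f rays per pixel\n", rays_per_second / px_1080p);  // ~4.8
    return 0;
}
```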

4_PowerVR Ray Tracing - hybrid rendering (4).jpg

"When we visited Imagination Technologies at CES, they were showing off some intriguing hardware that augments their GPUs in order to accelerate ray-traced rendering. Ray tracing is a well-known and high-quality form of rendering that relies on the physical simulation of light rays bouncing around in a scene. Although it's been used in movies and in static scene creation, ray tracing has generally been too computationally intensive to be practical for real-time graphics and gaming. However, Imagination Tech is looking to bring ray-tracing to real-time graphics—in the mobile GPU space, no less—with its new family of Wizard GPUs."


GDC 14: WebCL 1.0 Specification is Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:03 AM |
Tagged: WebCL, gdc 14, GDC

The Khronos Group has just ratified the standard for WebCL 1.0. The API is expected to provide a massive performance boost to web applications that are dominated by expensive functions which can be offloaded to parallel processors, such as GPUs and multi-core CPUs. The specification also allows WebCL to communicate with WebGL and share buffers with it through an extension.

WebCL_300_167_75.png

Frequent readers of the site might remember that I have a particular interest in WebCL. Based on OpenCL, it allows web apps to obtain a list of every available compute device and target any of them with workloads. I have personally executed tasks on an NVIDIA GeForce 670 discrete GPU and other jobs on my Intel HD 4000 iGPU, at the same time, using the WebCL prototype from Tomi Aarnio of Nokia Research. The same is true for users with multiple discrete GPUs installed in their system (even if they are not compatible with CrossFire or SLI, or are from different vendors altogether). This could be very useful for physics, AI, lighting, and other game middleware packages.
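Since WebCL is modeled on OpenCL's host API, enumerating compute devices from a web app looks a lot like the following plain OpenCL sketch (the C host API rather than the actual JavaScript binding, and purely illustrative):

```cpp
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);

        // Every device -- discrete GPU, iGPU, multi-core CPU -- shows up here,
        // and each one can be handed its own queue of work.
        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("Platform %u, device %u: %s\n", p, d, name);
        }
    }
    return 0;
}
```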

Still, browser adoption might be rocky for quite some time. Google, Mozilla, and Opera Software were each involved in the working draft, which leaves both Apple and Microsoft notably absent. Even so, I am not sure how much interest exists within Google, Mozilla, and Opera to take it from a specification to a working feature in their browsers. Some individuals have expressed more faith in WebGL compute shaders than in WebCL.

Of course, that can change with just a single "killer app", library, or middleware.

I do expect some resistance from the platform holders, however. Even Google has been pushing back on OpenCL support in Android in favor of its own RenderScript abstraction. The performance of a graphics processor is also significant leverage for a native app; there is little, otherwise, that cannot be accomplished with Web standards except a web browser itself (and there are even some non-serious projects for that). If Microsoft can support WebGL, though, there is always hope.

The specification is available at the Khronos website.

Source: Khronos

GDC 14: EGL 1.5 Specification Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:02 AM |
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL

The Khronos Group has also released their ratified specification for EGL 1.5. This API is at the center of data and event management between other Khronos APIs. This version increases security, interoperability between APIs, and support for many operating systems, including Android and 64-bit Linux.

khronos-EGL_500_123_75.png

The headline on the list of changes is the promotion of EGLImage objects from extension status into EGL 1.5's core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is increased security around creating a graphics context, primarily designed for WebGL, where any arbitrary website effectively becomes a graphics application. Further down the list is the EGLSync object, which allows a closer partnership between OpenGL (and OpenGL ES) and OpenCL; the GPU may not need CPU involvement when scheduling between tasks on the two APIs.
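As a rough sketch of what the EGLImage promotion means in practice, the unsuffixed core entry point can now wrap an existing OpenGL (ES) texture as an EGLImage that another context or API can consume. This assumes a valid display, context, and texture id, with error handling omitted; the helper name is hypothetical.

```cpp
#include <EGL/egl.h>
#include <cstdint>

EGLImage WrapTexture(EGLDisplay dpy, EGLContext ctx, unsigned int textureId)
{
    // EGL_GL_TEXTURE_2D and EGL_GL_TEXTURE_LEVEL are core tokens in EGL 1.5,
    // so no _KHR-suffixed extension entry points are required.
    const EGLAttrib attribs[] = {
        EGL_GL_TEXTURE_LEVEL, 0,   // share mip level 0
        EGL_NONE
    };
    return eglCreateImage(dpy, ctx, EGL_GL_TEXTURE_2D,
                          (EGLClientBuffer)(std::uintptr_t)textureId, attribs);
}
```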

During the call, the representative also wanted to mention that developers have asked them to bring EGL back to Windows. While it has not happened yet, they have announced that it is a current target.

The EGL 1.5 spec is available at the Khronos website.

Source: Khronos