NAB 2014: Thunderbolt Networking Announced for Windows

Subject: General Tech, Networking, Systems, Shows and Expos | April 8, 2014 - 03:26 PM |
Tagged: NAB, NAB 14, Thunderbolt 2, thunderbolt

Video professionals are still attached to Thunderbolt in much the same way that FireWire needed to be pried from their cold, dead hands. It is a very high-bandwidth connector, useful for sending and receiving 4K video. It was also originally exclusive to Apple, so you can guess which industries were early adopters. Intel has focused their Thunderbolt announcements on the National Association of Broadcasters (NAB) show. This year, Thunderbolt Networking will be available for Windows via a driver, allowing any combination of Macs and Windows PCs to be linked by a 10 Gigabit network.

Intel-Thunderbolt2-Networking.jpg

Of course, this is not going to be something that you can plug into a router. This is a point-to-point network for sharing files between two devices... really fast. Perhaps one use case would be a workstation with a Mac and a Windows PC on a KVM switch. If both are connected with Thunderbolt 2, they could share the same storage pool.

While this feature already exists on Apple devices, the PC driver will be available... "soon".

Source: Intel

BUILD 2014: Windows Sideloading Changes Announced

Subject: General Tech, Systems, Shows and Expos | April 8, 2014 - 01:11 AM |
Tagged: BUILD 2014, microsoft, windows, winRT

A few days ago, I reported on the news from BUILD 2014 that Windows would see the return of the Start Menu and windowed apps. These features, which are not included with today's Windows 8.1 Update 1, will come in a later version. While I found these interface changes interesting, I reiterated that the user interface was not my concern: Windows Store certification was. I did leave room for a little hope, however, because Microsoft scheduled an announcement of changes. It was focused on enterprise customers, so I did not hold my breath.

And some things did change... but not enough for the non-enterprise user.

tiles2.jpg

Microsoft is still hanging on to the curation of apps, except for "domain-joined" x86 Enterprise and x86 Pro PCs; RT devices and computers that are not domain-joined will only allow sideloaded apps with a key, and that certificate (key) is not free for everyone. Of course, none of this affects native x86 applications. Thankfully, the prospect of WinRT APIs eventually replacing Win32 completely seems less likely now. It could still happen if the Windows Store has a major surge in popularity but, as it stands right now, Microsoft seems to be spending less effort containing x86 for an eventual lobotomy.

If it does happen, it would be a concern for a variety of reasons:

  1. Governments, foreign or domestic, could pressure Microsoft to ban encryption software.

  2. Internet Explorer's Trident would have no competition pushing it to adopt new web standards.

  3. You could not create an app for just a friend or family member (unless it is a web app in IE).

  4. When you build censorship mechanisms, the crazies will come with demands to abuse them.

So I am still concerned about the future of Windows. I am still not willing to believe that Microsoft will support x86-exclusive applications until the end of time. If that support ends, and sideloading is not publicly available, and web standards are forced into stagnation by a lack of alternative web browsers, then I can see bad times ahead. I will not really feel comfortable until Microsoft makes a definitive pledge to let users control what goes on their devices, even if Microsoft (or someone with some form of authority over them) dislikes it.

But I know that many disagree with me. What are your thoughts? Comment away!

Source: ZDNet

Build 2014: .NET Foundation Announced with Open Source

Subject: General Tech, Shows and Expos | April 4, 2014 - 03:42 AM |
Tagged: BUILD 2014, microsoft, .net

Microsoft has announced the creation of the .NET Foundation along with the open source release of several .NET frameworks and languages. This comes a day after the simultaneous unveiling and open sourcing of WinJS, a JavaScript library which brings "Modern"-style interface elements to websites (and web apps). Building-block APIs are common, but this one could help Microsoft's design paradigms gain traction in apps on other platforms.

microsoft-dotnet-foundation.png

.NET has been very popular since its initial release. I have seen it used frequently in applications, particularly where a simple form-based interface is required. It is easy to develop for and accessible from several languages, such as C++, C#, and VB.NET. Enterprise application developers were particularly interested in it, especially for the security benefits of managed code.

The framework inspired an open source reimplementation, Mono, spearheaded by Novell. Some time later, the company Xamarin was formed from the original Mono development team, and it maintains the project to this day. In fact, Miguel de Icaza was at Build 2014 discussing the initiative. He seems content with Microsoft's new Roslyn compiler and the working relationship between the two companies as a whole.

WinJS is released under the very permissive Apache 2.0 license. Other code, such as the Windows Phone Toolkit, is released under other licenses, such as the Microsoft Public License (Ms-PL). Pay attention to any given project's license; it would not be wise to assume. Still, this sounds like a good step.

Source: ZDNet

Build 2014: Microsoft Presents New Start Menu

Subject: General Tech, Shows and Expos | April 2, 2014 - 09:53 PM |
Tagged: BUILD 2014, microsoft, windows, start menu

Microsoft had numerous announcements during their Build 2014 opening keynote, which makes sense, as they needed to fill the three hours assigned to it. In this post, I will focus on the upcoming changes to the Windows desktop experience. Two related features were highlighted: the ability to run Modern apps in a desktop window, and the corresponding return of the Start Menu.

I must say, the way that they grafted Start Screen tiles onto the Start Menu is pretty slick. Since Windows Vista, the Start Menu has felt awkward, split between recently used applications on the left, common shortcuts in a breakout on the right, and an expanded "All Programs" submenu handle on the bottom. It is functional, but something about it always felt off. This looks a lot cleaner, in my opinion, especially since its width varies according to how many applications are pinned.

Of course, my major complaint with Windows 8.x has nothing to do with the interface. There has not been any discussion around sideloading applications to get around Windows Store certification requirements. This is a major concern for browser vendors, and it should be one for many others: hobbyists who might want to share their creations with one or two friends or family members rather than an entire Windows Store region, or citizens of countries whose governments might pressure Microsoft to ban encryption or security applications.

That said, there is a session tomorrow called "Deploying and Managing Enterprise Apps", discussing changes to app sideloading in Windows 8.1. Enterprise users can already get sideloading certificates from Microsoft. Maybe it will be expanded? I am not holding my breath.

Keep an eye out, because there should be a lot of news over the next couple of days.

Source: ZDNet

GDC 2014: Shader-limited Optimization for AMD's GCN

Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM |
Tagged: gdc 14, GDC, GCN, amd

While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by compute cores, sometimes thousands of them. Video drivers are complex packages of software, and one of their many tasks is converting your scripts, known as shaders, into machine code for their hardware. If this machine code is efficient, it can mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.

amd-gcn-unit.jpg

Emil Persson of Avalanche Studios, probably best known for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, because it exists in both consoles and PCs, while bringing up older GPUs for comparison. Yes, he has many snippets of GPU assembly code.

AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like: resources are mapped to memory (and caches of that memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), and vectors are mostly treated as collections of scalars. Tricks that try to pack work into vector instructions, such as using dot products, can simply impose irrelevant restrictions on the compiler and optimizer, because the hardware breaks those vector operations back down into the very same component-by-component ops you thought you were avoiding.
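To make that concrete, here is a rough sketch (my own illustration, not one of Persson's snippets) of what a three-component dot product actually costs on a scalar-per-lane architecture like GCN: one multiply and two multiply-adds per thread, whether or not the source code dressed it up as a vector operation.

```cpp
#include <cmath>

// Hypothetical three-component vector; in a shader this would be a float3.
struct Float3 { float x, y, z; };

// What "dot(normal, light)" lowers to on a scalar-per-lane ISA:
// one multiply followed by two (ideally fused) multiply-adds.
// Packing the data into a vector type did not make it one instruction.
float dot3(const Float3 &a, const Float3 &b) {
    float r = a.x * b.x;         // multiply
    r = std::fma(a.y, b.y, r);   // multiply-add
    r = std::fma(a.z, b.z, r);   // multiply-add
    return r;
}
```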

Basically, and it makes sense coming from GDC, this talk rarely glosses over a point. It compares the execution speed of individual ops against one another, at various precisions, and notes which to avoid (protip: integer divide). Also, fused multiply-add is awesome.

I know I learned.

As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and in terms of Intel vs. AMD, it might help). But honestly, I believe that it will be trends like this presentation which prove more significant... even if behind the scenes. Of course, developers were always having these discussions, but now console developers will probably be talking about only one architecture - that is a lot of people talking about very few things.

This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.

GDC 14: NVIDIA, AMD, and Intel Discuss OpenGL Speed-ups

Subject: General Tech, Shows and Expos | March 22, 2014 - 01:41 AM |
Tagged: opengl, nvidia, Intel, gdc 14, GDC, amd

So, for all the discussion about DirectX 12, the three main desktop GPU vendors (NVIDIA, AMD, and Intel) want to tell OpenGL developers how to tune their applications. Using OpenGL 4.2 and a few cross-vendor extensions, because OpenGL is all about its extensions, a handful of known tricks can reduce driver overhead up to ten-fold and increase performance up to fifteen-fold. The talk is very graphics-developer-centric, but it basically describes a series of tricks known to accomplish feats similar to what Mantle and DirectX 12 promise.

opengl_logo.jpg

The 130-slide presentation is broken into a few sections, with each GPU vendor getting a decent chunk of time. On occasion, they mention which implementation fares better with a given function call. The main point they wanted to drive home (they clearly repeated the slide three times, in three different fonts) is that none of this requires a new API. Everything exists and can be implemented right now. The real trick is knowing how not to poke the graphics library the wrong way.
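One widely cited trick in this category (my example of the class, not necessarily lifted from these slides) is persistent buffer mapping via the ARB_buffer_storage extension: map a buffer once and keep writing to it, instead of paying for a map/unmap round-trip through the driver every frame. A minimal sketch, assuming a valid OpenGL context with loaded function pointers and a size-byte vertex payload:

```cpp
#include <GL/glcorearb.h>
#include <cstring>

// Sketch only: create an immutable buffer, map it once, persistently,
// then stream vertex data into it each frame with a plain memcpy.
void setupAndStream(const void *vertices, GLsizeiptr size) {
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);

    const GLbitfield flags = GL_MAP_WRITE_BIT |
                             GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);
    void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);

    // Per frame: write directly; no glMapBuffer/glBufferSubData churn.
    std::memcpy(ptr, vertices, size);
    // A real renderer would insert a glFenceSync here so it never
    // overwrites data the GPU is still reading.
}
```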

The page also hosts a keynote from the recent Steam Dev Days.

That said, an advantage that I expect from DirectX 12 and Mantle is reduced driver complexity. Since the processors have settled into standards, I expect that drivers will not need to do as much unless the library demands it for legacy reasons. I am not sure how extending OpenGL affects that benefit, as opposed to isolating the legacy and building on a solid foundation, but I wonder if these extensions could be just as easy to maintain and optimize. Maybe they can.

Either way, the performance figures do not lie.

Source: NVIDIA

Oculus Rift Development Kit 2 (DK2) Is $350, Expected in July

Subject: General Tech, Displays, Shows and Expos | March 22, 2014 - 01:04 AM |
Tagged: oculus rift, Oculus, gdc 14, GDC

Last month, we published a news piece stating that Oculus Rift production had been suspended because "certain components" were unavailable. At the time, the company said it was looking for alternate suppliers but did not know how long that would take. The speculation was that the company was simply readying a new version and did not want to cannibalize its sales.

This week, they announced that new version.

DK2, as it is called, integrates a pair of 960x1080 OLED displays (correction, March 22nd @ 3:15pm: it is technically a single 1080p display that is divided per eye) for higher resolution and lower persistence. Citing Valve's VR research, they claim that the low persistence will reduce motion blur caused by your eye blending neighboring frames together. In this design, the display flashes each image for a short period before going black, and it does this at a high enough rate to keep your eye fed with light. The higher resolution also reduces the "screen door effect" that users complained about with the first release. Like their "Crystal Cove" prototype, it also uses an external camera to reduce latency in detecting your movement. All of this should combine to reduce motion sickness.

I would expect that VR has a long road ahead of it before it becomes a commercial product for the general population, though. There are many legitimate concerns about leaving your users trapped in a sensory deprivation apparatus when Kinect could not even go a couple of days without someone pretending to play volleyball and wrecking their TV with ceiling fan fragments. Still, this company seems to be doing it intelligently: stay afloat on developers and lead users as you work through your prototypes. It is cool, even if it will get significantly better, and people will support the research while getting the best available at the time.

DK2 is available for pre-order for $350 and is expected to ship in July.

Source: Oculus

GDC 14: Unreal Engine 4 Launches with Radical Changes

Subject: General Tech, Shows and Expos | March 19, 2014 - 08:15 PM |
Tagged: unreal engine 4, gdc 14, GDC, epic games

Game developers, from indies to giants, can now access Unreal Engine 4 with a $19/month subscription (plus 5% of revenue from resulting sales). This is a much different model from the UDK, which was free for developing games with its precompiled builds until commercial release, at which point an upfront fee and a 25% royalty applied. The Unreal Engine 4 subscription, however, also gives you full C++ source code access (something I have wondered about since the announcement that UnrealScript no longer exists).

Of course, the Unreal Engine 3-based UDK is still available (and just recently updated).

This is definitely interesting and, I believe, a response to publishers doubling down on developing their own engines. EA has basically sworn off engines outside of their own Frostbite and Ignite technologies. Ubisoft has only announced or released three games based on Unreal Engine since 2011; Activision has announced or released seven in that time, three of which were in that first year. Epic Games has always been very friendly to smaller developers and, with the rise of the internet, it is becoming much easier for indie developers to release content through Steam or even their own websites. These developers now have a "AAA" engine, which I think almost anyone would agree Unreal Engine 4 is, with an affordable license (and full source access).

Speaking of full source access, licensees can access the engine at Epic's GitHub. While a top-five publisher might hesitate to share fixes and patches, the army of smaller developers might share and share-alike. This could lead to Unreal Engine 4 acquiring its own features rapidly. Epic highlights their Oculus VR, Linux and Steam OS, and native HTML5 initiatives but, given community support, there could be pushes into unofficial support for Mantle, TrueAudio, or other technologies. Who knows?

A sister announcement, albeit a much smaller one, is that Unreal Engine 4 is now part of NVIDIA's GameWorks initiative. This integrates various NVIDIA SDKs, such as PhysX, into the engine. The press release quote from Tim Sweeney is as follows:

Epic developed Unreal Engine 4 on NVIDIA hardware, and it looks and runs best on GeForce.

Another brief mention is that Unreal Engine 4 will have expanded support for Android.

So, if you are a game developer, check out the official Epic Games blog post at their website. You can also check their YouTube page for various videos, many of which were released today.

Source: Epic Games

GDC 14: CRYENGINE To Support Mantle, AMD Gets Another Endorsement

Subject: General Tech, Shows and Expos | March 19, 2014 - 05:00 PM |
Tagged: Mantle, gdc 14, GDC, crytek, CRYENGINE

While I do not have too many details otherwise, Crytek and AMD have announced that mainline CRYENGINE will support the Mantle graphics API. CRYENGINE, by Crytek, now sits alongside Frostbite, by DICE, and Nitrous, by Oxide Games, as engines which support that alternative to DirectX and OpenGL. This comes little more than a week after Crytek's announcement of native Linux support for their popular engine.

Crysis2.jpg

The tape has separate draw calls!

Crytek has been "evaluating" the API for quite some time now, showing interest back at the AMD Developer Summit. Since then, they have apparently made a clear decision on it. It is also not the first time that CRYENGINE has been publicly introduced to Mantle, with Chris Robert's Star Citizen, also powered by the 4th Generation CRYENGINE, having announced support for the graphics API. Of course, there is a large gap between having a licensee do legwork to include an API and having the engine developer provide you supported builds (that would be like saying UnrealEngine 3 supports the original Wii).

Hopefully we will learn more as GDC continues.

Editor's (Ryan) Take:

As the week at GDC has gone on, AMD continues to push forward with Mantle and calls Crytek's implementation of the low-level API "a huge endorsement" of the company's direction and vision for the future. Many, myself included, have considered the pending announcement of DX12 to be a major setback for Mantle, but AMD claims that view is "short-sighted" and says that, as more developers come into the Mantle ecosystem, it is proof AMD is doing the "right thing."

Here at GDC, AMD told us they have expanded the number of beta Mantle members dramatically, with plenty more applications (dozens) waiting. Obviously, this could put a lot of strain on AMD for Mantle support and maintenance, but representatives assure us that the major work of building out documentation and development tools is nearly 100% behind them.

mantle.jpg

If stories like this one over at SemiAccurate are true, and Microsoft's DirectX 12 really will be nearly identical to AMD's Mantle, then it makes sense that developers serious about new gaming engines can get a leg up on projects by learning Mantle today. Applying that knowledge to the DX12 API upon its release could speed up development and improve implementation efficiency. From what I am hearing from the few developers willing to even mention DX12, Mantle is much further along in its release (late beta) than DX12 is (early alpha).

AMD indeed was talking with Microsoft and sharing the development of Mantle "every step of the way", and AMD has stated on several occasions that there were two outcomes for Mantle: it either becomes, or inspires, a new industry standard in game development. Even if DX12 is more or less a carbon copy of Mantle, forcing NVIDIA to implement that API style with DX12's release, AMD could potentially have the advantage in gaming performance and support between now and Microsoft's DirectX release. That could be as much as a full calendar year, from reports we are getting at GDC.

Ray Tracing is back? That's Wizard!

Subject: General Tech, Shows and Expos | March 19, 2014 - 01:20 PM |
Tagged: Imagination Technologies, gdc 14, wizard, ray tracing

The Tech Report visited Imagination Technologies' booth at GDC, where they were showing off a new processor, the Wizard GPU. It is based on the PowerVR Series6XT Rogue graphics processor and is specifically designed to accelerate ray tracing, a topic we haven't heard much about lately. They describe the performance as capable of processing 300 million rays and 100 million dynamic triangles per second, which translates to 7 to 10 rays per pixel at 720p and 30Hz, or 3 to 5 rays per pixel at 1080p and 30Hz. That is not bad, though Imagination Technologies estimates that movies render at 16 to 32 rays per pixel, so it may be a while before we see a Ray Tracing slider under Advanced Graphics Options.
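As a quick sanity check on those figures: 1280 x 720 pixels at 30 Hz is about 27.6 million pixel-samples per second, and 300 million rays spread across them works out to roughly 10.8 rays per pixel; at 1920 x 1080 and 30 Hz it is about 62.2 million samples, or roughly 4.8 rays per pixel. Both line up with the quoted ranges once real scenes (and triangle processing) eat into the peak.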

4_PowerVR Ray Tracing - hybrid rendering (4).jpg

"When we visited Imagination Technologies at CES, they were showing off some intriguing hardware that augments their GPUs in order to accelerate ray-traced rendering. Ray tracing is a well-known and high-quality form of rendering that relies on the physical simulation of light rays bouncing around in a scene. Although it's been used in movies and in static scene creation, ray tracing has generally been too computationally intensive to be practical for real-time graphics and gaming. However, Imagination Tech is looking to bring ray-tracing to real-time graphics—in the mobile GPU space, no less—with its new family of Wizard GPUs."


GDC 14: WebCL 1.0 Specification is Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:03 AM |
Tagged: WebCL, gdc 14, GDC

The Khronos Group has just ratified the WebCL 1.0 standard. The API is expected to provide a massive performance boost to web applications that are dominated by expensive functions, by offloading those functions to parallel processors such as GPUs and multi-core CPUs. The specification also allows WebCL to communicate and share buffers with WebGL via an extension.

WebCL_300_167_75.png

Frequent readers of the site might remember that I have a particular interest in WebCL. Based on OpenCL, it allows web apps to obtain a list of every available compute device and target each one with workloads. I have personally executed tasks on an NVIDIA GeForce GTX 670 discrete GPU and other jobs on my Intel HD 4000 iGPU, at the same time, using the WebCL prototype from Tomi Aarnio of Nokia Research. The same is true for users with multiple discrete GPUs installed in their system (even if they are not compatible with CrossFire or SLI, or are from different vendors altogether). This could be very useful for physics, AI, lighting, and other game middleware packages.
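WebCL inherits that enumeration model directly from OpenCL (in the browser it starts from a call like webcl.getPlatforms()). As a rough sketch of the underlying idea, here is the equivalent walk in plain OpenCL, the API that WebCL wraps, rather than WebCL's JavaScript binding itself:

```cpp
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// Enumerate every OpenCL platform and device, the same model WebCL
// exposes to web apps: get platforms, then query each one's devices.
int main() {
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, nullptr, &numPlatforms);
    std::vector<cl_platform_id> platforms(numPlatforms);
    clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        cl_uint numDevices = 0;
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &numDevices);
        std::vector<cl_device_id> devices(numDevices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, numDevices,
                       devices.data(), nullptr);

        // Each device (discrete GPU, iGPU, CPU...) can be handed its
        // own command queue and its own share of the workload.
        for (cl_device_id d : devices) {
            char name[256] = {0};
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("Compute device: %s\n", name);
        }
    }
    return 0;
}
```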

Still, browser adoption might be rocky for quite some time. Google, Mozilla, and Opera Software were each involved in the working draft, which leaves both Apple and Microsoft notably absent. Even then, I am not sure how much interest exists within Google, Mozilla, and Opera to take WebCL from a specification to a working feature in their browsers. Some individuals have expressed more faith in WebGL compute shaders than in WebCL.

Of course, that can change with just a single "killer app", library, or middleware.

I do expect some resistance from the platform holders, however. Even Google has been pushing back on OpenCL support in Android in favor of their own "Renderscript" abstraction. The performance of a graphics processor is also significant leverage for native apps; there is little else that cannot be accomplished with web standards, except a web browser itself (and there are even some non-serious projects for that). If Microsoft can support WebGL, however, there is always hope.

The specification is available at the Khronos website.

Source: Khronos

GDC 14: EGL 1.5 Specification Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:02 AM |
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL

The Khronos Group has also released their ratified specification for EGL 1.5. This API sits at the center of data and event management between other Khronos APIs. This version increases security, improves interoperability between APIs, and adds support for many operating systems, including Android and 64-bit Linux.

khronos-EGL_500_123_75.png

The headline change is the promotion of EGLImage objects from the realm of extensions into EGL 1.5's core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is increased security around creating a graphics context, primarily designed for WebGL, since any arbitrary website can become a WebGL application. Further down the list is the EGLSync object, which allows further partnership between OpenGL (and OpenGL ES) and OpenCL: the GPU may not need CPU involvement when scheduling between tasks on both APIs.
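As a small taste of what moving sync objects into core looks like, here is a minimal sketch, assuming a valid EGLDisplay (dpy) with a current context; the suffix-free entry points below are the EGL 1.5 core versions of the older KHR extensions:

```cpp
#include <EGL/egl.h>

// Insert a fence after queued GPU work, then wait for the GPU to reach
// it. No vendor extension suffixes needed once EGL 1.5 is present.
void waitForGpu(EGLDisplay dpy) {
    EGLSync fence = eglCreateSync(dpy, EGL_SYNC_FENCE, nullptr);
    eglClientWaitSync(dpy, fence,
                      EGL_SYNC_FLUSH_COMMANDS_BIT,  // flush, then wait
                      EGL_FOREVER);
    eglDestroySync(dpy, fence);
}
```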

During the call, the representative also mentioned that developers have asked them to bring EGL back to Windows. While that has not happened yet, they have announced that it is a current target.

The EGL 1.5 spec is available at the Khronos website.

Source: Khronos

GDC 14: SYCL 1.2 Provisional Spec Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:01 AM |
Tagged: SYCL, opencl, gdc 14, GDC

To gather community feedback, the provisional specification for SYCL 1.2 has been released by The Khronos Group. SYCL builds upon OpenCL with the C++11 standard. This technology is built on another Khronos platform, SPIR, which allows the OpenCL C programming language to be mapped onto LLVM and its hundreds of compatible languages (and Khronos is careful to note that they intend for anyone to make their own compatible alternative language).

khronos-SYCL_Color_Mar14_154_75.png

In short, SPIR allows many languages which can compile into LLVM to take advantage of OpenCL. SYCL is the specification for creating C++11 libraries and compilers through SPIR.
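For a flavor of the single-source model, here is a minimal sketch based on the provisional 1.2 interfaces (provisional means exactly that: names and details may shift before ratification). Kernel and host code live in the same C++11 file, and the runtime handles the OpenCL plumbing:

```cpp
#include <CL/sycl.hpp>
#include <vector>

int main() {
    std::vector<float> data(1024, 1.0f);
    {
        // The default selector picks an available OpenCL device.
        cl::sycl::queue q;
        cl::sycl::buffer<float, 1> buf(data.data(),
                                       cl::sycl::range<1>(data.size()));

        q.submit([&](cl::sycl::handler &cgh) {
            auto acc = buf.get_access<cl::sycl::access::mode::read_write>(cgh);
            // The lambda below is the kernel; it is compiled for the
            // device through SPIR, yet written as ordinary C++11.
            cgh.parallel_for<class doubler>(
                cl::sycl::range<1>(data.size()),
                [=](cl::sycl::id<1> i) { acc[i] *= 2.0f; });
        });
    }   // buffer goes out of scope: results are copied back into "data"
    return 0;
}
```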

As stated earlier, Khronos wants anyone to make their own compatible language:

While SYCL is one possible solution for developers, the OpenCL group encourages innovation in programming models for heterogeneous systems, either by building on top of the SPIR™ low-level intermediate representation, leveraging C++ programming techniques through SYCL, using the open source CLU libraries for prototyping, or by developing their own techniques.

SYCL 1.2 supports OpenCL 1.2 and they intend to develop it alongside OpenCL. Future releases are expected to support the latest OpenCL 2.0 specification and keep up with future developments.

The SYCL 1.2 provisional spec is available at the Khronos website.

Source: Khronos

OCZ Partners With AMD at GDC to Showcase Solid-State Supercharged Systems for Gaming Professionals

Subject: General Tech, Shows and Expos | March 18, 2014 - 12:38 PM |
Tagged: gdc 14, amd, ocz, Vector 150

If you make it to the Game Developers Conference this year, make sure to pay a visit to the AMD booth, where you can get a look at OCZ's Vector 150 drives in action. The two companies aim to show that these drives are good not only for the gamer but for the game designer as well.

unnamed.jpg

OCZ Vector 150 SSDs on Display at AMD Booth #1024, March 17-21 in San Francisco, CA

gdc.jpg

SAN JOSE, CA - March 17, 2014 - OCZ Storage Solutions - a Toshiba Group Company and leading provider of high-performance solid state drives (SSDs) for computing devices and systems, today announced its partnership with AMD to showcase the power of high performance technology at the Game Developer Conference (GDC) March 17-21 at the Moscone Center in San Francisco, CA. AMD's demo systems will feature best-in-class Vector 150 Series solid state drives demonstrating how developers can enhance productivity and efficiency in their work.

"We are excited to partner with AMD for the upcoming Game Developers Conference to support the fast growing interactive game development industry," said Alex Mei, CMO for OCZ Storage Solutions. "OCZ is dedicated to delivering premium solid state storage solutions that are not only a useful tool for developers, but also meet the unique demands of enthusiasts and gamers on all levels."

"Our presence at the 2014 Game Developer Conference will feature a number of high-performance gaming systems running 24/7 in harsh conditions," said Darren McPhee, director of product marketing, Graphics Business Unit, AMD. "We knew that OCZ Vector SSDs were uniquely ready to meet the reliability requirements of our gaming installations. Between the high performance graphics of AMD Radeon™ GPUs and the fast load times of OCZ Vector SSDs, visitors to AMD's booth in the South Hall are in for a great gaming experience!"

GDC is the world's largest game industry event, attracting over 23,000 professionals including programmers, artists, producers, designers, audio professionals, business decision-makers, and other digital gaming industry authorities. OCZ's premium Vector 150 Series, designed for workstation users along with bleeding-edge enthusiasts, will be in AMD systems that promote improved CPU and GPU performance, enhanced rendering, speed, and overall system performance. Professional developer applications demand peak transfer speeds and ultra-high performance; OCZ SSDs offer 100 times faster access to data, quicker boot ups, faster file transfers, and a more responsive computing experience than hard drives.

GDC enables OCZ to team up with valued industry partners like AMD to reaffirm the Company's commitment to the gaming segment and promote the use of flash storage for both developers and gamers themselves.

GDC 14: OpenGL ES 3.1 Spec Released by Khronos Group

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 17, 2014 - 09:01 AM |
Tagged: OpenGL ES, opengl, Khronos, gdc 14, GDC

Today, day one of Game Developers Conference 2014, the Khronos Group officially released the 3.1 specification for OpenGL ES. The main new feature, brought over from OpenGL 4, is the addition of compute shaders. This opens GPGPU functionality to mobile and embedded devices through applications developed in OpenGL ES, especially when the developer does not want to add OpenCL.

The update is backward-compatible with OpenGL ES 2.0 and 3.0 applications, allowing developers to add features, as available, to their existing apps. On the device side, most functionality is expected to arrive as a driver update.

opengl-es-logo.png

OpenGL ES, which stands for OpenGL for Embedded Systems but is rarely branded as such, delivers what the Khronos Group considers the most important features of the graphics library to the majority of devices. The group has been working toward merging ES with the "full" graphics library over time. The previous release, OpenGL ES 3.0, focused on becoming a direct subset of OpenGL 4.3. This release expands the feature-space it occupies.

OpenGL ES also forms the basis for WebGL. The current draft of WebGL 2.0 uses OpenGL ES 3.0, although that was not discussed today. I have heard murmurs (not from Khronos) about some parties pushing for compute shaders in that specification, and this announcement puts us closer to that.

The new specification also adds other features, such as the ability to issue a draw without CPU intervention. You could imagine a particle simulation, for instance, that wants to draw the results after its compute shader terminates. Shading is also less rigid: vertex and fragment shaders no longer need to be explicitly linked into a program before they are used. I inquired about the possibility that compute devices could be targeted (on devices with two GPUs) and possibly load-balanced, in a method similar to WebCL, but no confirmation or denial was provided (although he did mention that it would be interesting for apps that fall somewhere in the middle of OpenGL ES and OpenCL).
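That compute-then-draw flow, with no CPU round-trip, looks roughly like the following sketch (assuming an ES 3.1 context, compiled programs, and buffers already bound; the names here are hypothetical):

```cpp
#include <GLES3/gl31.h>

// Run a particle-update compute shader, then draw the result with an
// indirect draw whose parameters the compute shader itself wrote.
void simulateAndDraw(GLuint simProg, GLuint drawProg, GLuint particleCount) {
    glUseProgram(simProg);
    // The shader declares local_size_x = 64, so dispatch count/64 groups.
    glDispatchCompute(particleCount / 64, 1, 1);

    // Make the shader storage writes (positions + indirect draw args)
    // visible to the vertex puller and the indirect command reader.
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT |
                    GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT |
                    GL_COMMAND_BARRIER_BIT);

    glUseProgram(drawProg);
    // Draw parameters come from the bound GL_DRAW_INDIRECT_BUFFER,
    // which the compute shader filled; the CPU never reads them.
    glDrawArraysIndirect(GL_POINTS, nullptr);
}
```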

The OpenGL ES 3.1 spec is available at the Khronos website.

Source: Khronos

GDC 14: Valve's Steam Controller Is Similar to Dev Days

Subject: General Tech, Cases and Cooling, Shows and Expos | March 15, 2014 - 01:44 AM |
Tagged: GDC, gdc 14, valve, Steam Controller

Two months ago, Valve presented a new prototype of their Steam Controller with a significantly changed button layout. While the overall shape and two thumbpads remained constant, the touchscreen disappeared and the face buttons came to more closely resemble something from an Xbox or PlayStation controller. Another prototype image has now been released, ahead of GDC, without many changes.

valve-DOG_controller_MED.jpg

Valve is still iterating on its controller, however. Ten controllers, each handmade, will be available at GDC. This version has been tested internally for some undisclosed amount of time, but this will be the first time that outsiders give their feedback since the design shown at CES. The big unknown is: to what level are they going to respond to feedback? Are we at the stage where it is about button sizing? Or will it change radically, like to a two-slice toaster case with buttons inside the slots?

GDC is taking place March 17th through the 21st. The expo floor opens on the 19th.

GDC 14: Mozilla & Epic Games Run Unreal Engine 4 in Firefox

Subject: General Tech, Shows and Expos | March 12, 2014 - 09:17 PM |
Tagged: GDC, gdc 14, mozilla, epic games, unreal engine 4

Epic Games has wanted Unreal Engine in the web browser for quite some time now. Back in 2011, the company presented their Citadel demo running in Flash 11.2. A short while later, Mozilla and Epic ported it to raw JavaScript and WebGL. With the help of asm.js, a highly optimizable subset of JavaScript, Unreal Engine 3 was at home in the browser at near-native speed, with no plugins. Epic's Tim Sweeney and Mark Rein, in an interview with Gamasutra, said that Unreal Engine 4 will take this beyond a demo and target web browsers as a supported platform.

Today, Mozilla teases Unreal Engine 4 running in Firefox, ahead of GDC.

Speaking of speed, asm.js can now reach about 67% of native performance, and Mozilla is still optimizing their compiler. While it is difficult to write asm.js-compliant code by hand, companies like Epic simply compile their existing C/C++ code through Emscripten into that optimized JavaScript. If your native application has a bit of CPU overhead to spare, it could be little more than a compile away from running in the web browser, possibly any web browser on any platform, without plugins. This obviously has great implications for timeless classics that would otherwise outlive their host platforms.
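The workflow really is that simple in the trivial case. A toy example of my own (not Epic's build setup): any standard C++ program can be pushed through Emscripten's emcc driver, which compiles via LLVM to asm.js-flavored JavaScript plus an HTML harness.

```cpp
// hello.cpp
// Compile with:  emcc -O2 hello.cpp -o hello.html
// emcc drives the C++ through LLVM and emits asm.js-style JavaScript
// (plus an HTML page to host it); -O2 enables the optimized output.
#include <cstdio>

int main() {
    std::printf("Hello from C++, running as asm.js in the browser\n");
    return 0;
}
```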

Both Mozilla and Epic will have demos in their booths on the conference floor.

Source: Mozilla

GDC 2014: Crytek's CRYENGINE Adds Linux Support

Subject: General Tech, Shows and Expos | March 11, 2014 - 05:23 PM |
Tagged: gdc 14, crytek, CRYENGINE

The Game Developers Conference (GDC 2014) is set for next week in San Francisco, and Crytek has an early announcement. Attendees of the event, at presentations and at demos in Crytek's booth, will see CRYENGINE running natively on Linux. The engine has also been updated to include the enhancements first seen in Ryse, such as "Physically Based Shading".

CE_Character-Individualization-System.jpg

This announcement gives promise to SteamOS as a viable gaming platform, because games which license this engine would have an easier time porting over. That said, Unreal Engine has offered Linux compatibility to licensees with very limited uptake. Sure, Steam could change that trend, because a chicken or an egg could happen at some point; it does not matter which comes first. Still, this is not the first popular engine to be available for Linux.

Their "Physically Based Shading" system is quite interesting, however. As I understand it, the idea is that developers can make (or maybe use) a library of materials and apply it across any game. This should hopefully reduce the number of artist man-hours to produce a generalized optimal shader. It is much slower to tweak specular highlights and vector math than it is to say "you... are gold... be gold".

The official GDC expo will take place March 19th - 21st but I expect news will flood out from now until then.

Source: Crytek

Microsoft, Along with AMD, Intel, NVIDIA, and Qualcomm, Will Announce DirectX 12 at GDC 2014

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 5, 2014 - 08:28 PM |
Tagged: qualcomm, nvidia, microsoft, Intel, gdc 14, GDC, DirectX 12, amd

The announcement of DirectX 12 has been given a date and time via a blog post on the Microsoft Developer Network (MSDN) blogs. On March 20th at 10:00am (I assume PDT), a few days into the 2014 Game Developers Conference in San Francisco, California, the upcoming specification should be detailed for attendees. Apparently, four GPU manufacturers will also be involved with the announcement: AMD, Intel, NVIDIA, and Qualcomm.

microsoft-dx12-gdc-announce.jpg

As we reported last week, DirectX 12 is expected to target increased hardware control and decreased CPU overhead for added performance in "cutting-edge 3D graphics" applications. Really, this is the best time for it: graphics processors have mostly settled into highly efficient co-processors of parallel data, with some specialized logic for geometry and video tasks. A new specification can relax the demands on video drivers and thus keep the GPU (or GPUs, in Mantle's case) loaded and utilized.

But, to me, the most interesting part of this announcement is the nod to Qualcomm. Microsoft values DirectX as leverage over other x86 and ARM-based operating systems. With Qualcomm, clearly Microsoft believes that either Windows RT or Windows Phone will benefit from the API's next version. While it will probably make PC gamers nervous, mobile platforms will benefit most from reducing CPU overhead, especially if it can be spread out over multiple cores.

Honestly, that is fine by me. As long as Microsoft returns to treating the PC as a first-class citizen, I do not mind them helping mobile, too. We will definitely keep you up to date as we know more.

Source: MSDN Blogs

MWC 2014: Lenovo S860, S850, S660 Phones Announced

Subject: General Tech, Mobile, Shows and Expos | February 24, 2014 - 12:01 AM |
Tagged: smartphones, MWC 14, MWC, Lenovo

Also at Mobile World Congress, Lenovo expanded their smartphone portfolio with three additions. Each belongs to the S-series, although they are only loosely related to one another. North American readers will probably not be able to purchase them, of course; Lenovo's US and Canada websites do not even have a section for smartphones (products like the Vibe Z can be found via direct search but are not available to buy). I take that as a sign.

lenovo-mwc14-smartphones.jpg

Anyway, the three phones belong to the S-series, but each has a distinct customer in mind. The S860 seems to picture a business user who travels and wants to talk for long periods between charges. The similarly named S850 cuts back on RAM and battery capacity, replacing them with aesthetics (colors and an all-glass exterior) and a slightly lower price, for users shopping on design. Finally, the S660 is the lowest-priced of the three, sacrificing things like camera, storage, and screen resolution for users who do not care about any of that.

Let us compare the three phones in a table.

                    S860                 S850       S660
Display             5.3" 720p            5" 720p    4.7" 960x540
Processor (SoC)     MediaTek Quad-Core, 1.3 GHz (all three)
RAM                 2GB                  1GB        1GB
Dual SIM Card       Yes                  Yes        Yes
Storage             16GB                 16GB       8GB
Battery Capacity    4000mAh              2000mAh    3000mAh
Battery Life        24 hours (3G voice)  Unlisted   "All Day"

All three phones will be available this year, either at retail or on Lenovo's website. The Lenovo S860 is expected to retail for $349, the S850 should be $269, and the S660 comes in at $229.

Source: Lenovo