glNext Initiative Unveiled at GDC 2015

Subject: Graphics Cards, Shows and Expos | February 4, 2015 - 03:33 AM |
Tagged: OpenGL Next, opengl, glnext, gdc 2015, GDC

The first next-generation graphics API to actually ship was Mantle, which launched a little while after Battlefield 4, although its SDK is still invite-only. The DirectX 12 API quietly arrived with the recent Windows 10 Technical Preview, but no drivers, SDK, or software (that we know about) are available to the public yet. The Khronos Group has announced their project, and that is about all we have so far.

opengl_logo.jpg

According to Develop Magazine, the GDC event listing, and the participants themselves, the next OpenGL (currently called the “glNext initiative”) will be unveiled at GDC 2015. The talk will be presented by Valve, but it also includes Epic Games, who was closely involved with DirectX 12 through Unreal Engine; Oxide Games and EA/DICE, who were early partners with AMD on Mantle; and Unity, who recently announced support for DirectX 12 when it launches with Windows 10. Basically, this GDC talk includes almost every software developer that came out in early support of either DirectX 12 or Mantle, plus Valve. Off the top of my head, Futuremark is the only one I can think of that is unlisted. On the other hand, while the talk will obviously have driver support from at least one graphics vendor, none is listed. Will we see NVIDIA? Intel? AMD? All of the above? We don't know.

When I last discussed the next OpenGL initiative, I was attempting to parse the naming survey to figure out bits of the technology itself. As it turns out, the talk claims to go deep into the API, with demos, examples, and “real-world applications running on glNext drivers and hardware”. If this information makes it out (some talks unfortunately remain private, although this one looks public), then we should know more about glNext than we currently know about any competing API. Personally, I am hoping that they spent a lot of effort on the GPGPU side of things, building graphics atop compute rather than treating the two as separate entities. This would be especially good if it could be sandboxed for web applications.

This could get interesting.

Source: GDC

Graphics Developers: Help Name Next Generation OpenGL

Subject: General Tech, Graphics Cards | January 16, 2015 - 10:37 PM |
Tagged: Khronos, opengl, OpenGL ES, webgl, OpenGL Next

The Khronos Group probably wants some advice from graphics developers because they ultimately need to market to them; the future platform's success depends on developers' applications. If you develop games or other graphics software (web browsers?), then you can give your feedback. If not, it is probably best to leave the responses to the survey's target demographic.

opengl_logo.jpg

As for the questions themselves, the survey first asks whether you are (or were) an active software developer. From there, it asks you to score your opinion of OpenGL, OpenGL ES, and WebGL; whether you value the “Open” or the “GL” in the title; and whether you feel that OpenGL, OpenGL ES, and WebGL are related APIs. It also asks how you learn about the Khronos APIs. Finally, it directly asks for name suggestions and any final commentary.

Now it is time to (metaphorically) read tea leaves. The survey seems written primarily to establish whether developers consider OpenGL, OpenGL ES, and WebGL to be related libraries, and to gauge overall interest in each. If you look at the way OpenGL ES has been developing, it has slowly grown mobile graphics into a subset of desktop GPU features; it is basically an on-ramp to full OpenGL.

We expect that, like Mantle and DirectX 12, the next OpenGL initiative will be designed around efficiently loading massively parallel processors as directly as possible, with a little bit of fixed-function hardware for common tasks, like rasterizing triangles into fragments. The name survey might be implying that the Next Generation OpenGL Initiative is intended to be a unified platform for all of them: high-end, mobile, and even web.

If you are a graphics developer, the Khronos Group is asking for your feedback via their survey.

Khronos Announces "Next" OpenGL & Releases OpenGL 4.5

Subject: General Tech, Graphics Cards, Shows and Expos | August 15, 2014 - 08:33 PM |
Tagged: siggraph 2014, Siggraph, OpenGL Next, opengl 4.5, opengl, nvidia, Mantle, Khronos, Intel, DirectX 12, amd

Let's be clear: there are two stories here. The first is the release of OpenGL 4.5, and the second is the announcement of the "Next Generation OpenGL Initiative". They both arrive in the same press release, but they are two different statements.

OpenGL 4.5 Released

OpenGL 4.5 expands the core specification with a few extensions, and compatible hardware with OpenGL 4.5 drivers is guaranteed to support them. This includes features like ARB_direct_state_access, which allows modifying objects in a context without binding them first, and support for OpenGL ES 3.1 features that are traditionally missing from OpenGL 4, which makes porting OpenGL ES 3.1 applications to desktop OpenGL easier.
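To make the direct state access change concrete, here is a minimal sketch contrasting the old bind-to-edit pattern with the 4.5 entry points. It assumes an OpenGL 4.5 context and a function loader such as glad; the helper function names are mine.

```cpp
#include <glad/glad.h>  // assumed loader; any GL 4.5 loader works

// Pre-4.5 "bind to edit": the texture must occupy a binding point
// (disturbing whatever was bound there) just to be configured.
void create_texture_classic(GLuint* tex) {
    glGenTextures(1, tex);
    glBindTexture(GL_TEXTURE_2D, *tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 256, 256);
}

// OpenGL 4.5 direct state access: the object is addressed by name,
// so no binding point is touched.
void create_texture_dsa(GLuint* tex) {
    glCreateTextures(GL_TEXTURE_2D, 1, tex);
    glTextureParameteri(*tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTextureStorage2D(*tex, 1, GL_RGBA8, 256, 256);
}
```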

opengl_logo.jpg

It also adds a few new extensions as options:

ARB_pipeline_statistics_query lets a developer ask the GPU what it has been doing: how many vertices were submitted, how many shader invocations ran, and so forth. This could be useful for "profiling" an application (listing completed work to identify optimization points); see the first sketch after this list.

ARB_sparse_buffer allows developers to perform calculations on pieces of generic buffers without committing memory for the whole thing; backing storage is committed page by page, on demand. This is similar to ARB_sparse_texture... except that extension is for textures. Buffers are useful for things like vertex data (and so forth).

ARB_transform_feedback_overflow_query is apparently designed to let developers check whether a transform feedback buffer has overflowed, and to choose whether or not to draw objects based on the result. I might be wrong, but it seems like this would be useful for deciding whether or not to draw objects generated by geometry shaders.

KHR_blend_equation_advanced allows new blending equations between objects. If you use Photoshop, this would be "multiply", "screen", "darken", "lighten", "difference", and so forth. On NVIDIA's side, this will be directly supported on Maxwell and Tegra K1 (and later). Fermi and Kepler will support the functionality, but the driver will perform the calculations with shaders. AMD has yet to comment, as far as I can tell.
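As a hedged sketch of how a pipeline statistics query might be used (the counter token comes from the extension spec; draw_scene() is a hypothetical placeholder for the application's own rendering):

```cpp
// Hypothetical application draw calls.
void draw_scene();

// Count vertices submitted during one frame, using
// ARB_pipeline_statistics_query on a context that exposes it.
GLuint64 count_submitted_vertices() {
    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_VERTICES_SUBMITTED_ARB, query);
    draw_scene();
    glEndQuery(GL_VERTICES_SUBMITTED_ARB);

    GLuint64 vertices = 0;
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &vertices);  // blocks until ready
    glDeleteQueries(1, &query);
    return vertices;
}
```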
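And for flavor, a minimal sketch of selecting one of the advanced blend modes (the draw calls are placeholders of mine; the blend barrier is needed between overlapping draws unless the "coherent" variant of the extension is exposed):

```cpp
// Hypothetical application draw calls.
void draw_background();
void draw_overlapping_layer();

// Photoshop-style blending via KHR_blend_equation_advanced.
void composite_layers() {
    glEnable(GL_BLEND);
    glBlendEquation(GL_MULTIPLY_KHR);  // or GL_SCREEN_KHR, GL_DARKEN_KHR, ...

    draw_background();
    glBlendBarrierKHR();  // make prior writes visible to the blend stage
    draw_overlapping_layer();
}
```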

nvidia-opengl-debugger.jpg

Image from NVIDIA GTC Presentation

For developers, NVIDIA has launched 340.65 (340.23.01 for Linux) beta drivers with OpenGL 4.5 support. If you are not looking to create OpenGL 4.5 applications, do not get this driver; you really should not have any use for it at all.

Next Generation OpenGL Initiative Announced

The Khronos Group has also announced "a call for participation" to outline a new specification for graphics and compute. They want it to give developers explicit control over CPU and GPU tasks, be multithreaded, have minimal overhead, have a common shader language, and undergo "rigorous conformance testing". This sounds a lot like the design goals of Mantle (and what we know of DirectX 12).

amd-mantle-queues.jpg

And really, from what I hear and understand, that is what OpenGL needs at this point. Graphics cards look nothing like they did a decade ago (or over two decades ago). They each have very similar interfaces and data structures, even if their fundamental architectures vary greatly. If we can draw a line in the sand, legacy APIs can be supported but not optimized heavily by the drivers. After a short time, available performance for legacy applications would be so high that it wouldn't matter, as long as they continue to run.

On top of that, next-generation drivers should be significantly easier to develop, considering the reduced error checking (and other responsibilities). As I said in Intel's DirectX 12 story, it is still unclear whether the performance increase will be large enough to make most optimizations unnecessary, particularly those that trade extra workload or developer effort for queuing fewer GPU commands. We will need to wait for game developers to use it for a bit before we know.

Prying OpenGL to slip a little Mantle inside

Subject: General Tech | August 15, 2014 - 01:09 PM |
Tagged: amd, Mantle, opengl, OpenGL Next

Along with his announcements about FreeSync, Richard Huddy also discussed OpenGL Next, its relationship with Mantle, and the role Mantle played in DirectX 12's development.  AMD has given the Khronos Group, the developers of OpenGL, complete access to Mantle to help them integrate it into future versions of the API, starting with OpenGL Next.  He also discussed the advantages of Mantle over DirectX, citing AMD's ability to update it much more frequently than Microsoft has done with DX.  With over 75 developers working on titles that take advantage of Mantle, the interest is definitely there, but it is uncertain whether devs will actually benefit from an API which updates at a pace faster than a game can be developed.  Read on at The Tech Report.

Richard Huddy-578-80.jpg

"At Siggraph yesterday, AMD's Richard Huddy gave us an update on Mantle, and he also revealed some interesting details about AMD's role in the development of the next-gen OpenGL API."

Here is some more Tech News from around the web:

Tech Talk

Google I/O 2014: Android Extension Pack Announced

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | July 7, 2014 - 04:06 AM |
Tagged: tegra k1, OpenGL ES, opengl, Khronos, google io, google, android extension pack, Android

Sure, this is a little late. Honestly, when I first heard the announcement, I did not see much news in it. The slide from the keynote (below) showed four points: Tessellation, Geometry Shaders, Computer [sic] Shaders, and ASTC Texture Compression. I thought tessellation and geometry shaders were part of the OpenGL ES 3.1 spec, like compute shaders. This led to my immediate reaction: "Oh cool. They implemented OpenGL ES 3.1. Nice. Not worth a news post."

google-android-opengl-es-extensions.jpg

Image Credit: Blogogist

Apparently, they were not part of the ES 3.1 spec (although compute shaders are). My mistake. It turns out that Google is cooking up their own vendor-specific extensions. This is quite interesting, as it adds functionality to the API without the developer needing to target a specific GPU vendor (INTEL, NV, ATI, AMD), wait for approval from the Architecture Review Board (ARB), or use multi-vendor extensions (EXT). In other words, it sounds like developers can target Google as the vendor without knowing the actual hardware.

Hiding the GPU vendor from the developer is not the only reason for Google to host their own vendor extension. The added features are mostly from full OpenGL. This makes sense, because the pack was announced with NVIDIA and their Tegra K1, a Kepler-based SoC. Full OpenGL compatibility was NVIDIA's selling point for the K1, due to its heritage as a desktop GPU. But, instead of requiring apps to be programmed with full OpenGL in mind, Google's extension pushes those features down to OpenGL ES 3.1. If developers want to dip their toes into OpenGL, they can add a few Android Extension Pack features to their existing ES engines.
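From native code, checking for the pack at runtime would look something like the sketch below. The extension string is the aggregate one Android shipped for the pack; treat the surrounding function as an illustrative assumption.

```cpp
#include <GLES3/gl31.h>
#include <cstring>

// Returns true if the current OpenGL ES 3.1 context exposes the
// Android Extension Pack as a single aggregate extension.
bool has_android_extension_pack() {
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext =
            reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (std::strcmp(ext, "GL_ANDROID_extension_pack_es31a") == 0)
            return true;
    }
    return false;
}
```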

Epic Games' Unreal Engine 4 "Rivalry" Demo from Google I/O 2014.

The last feature, ASTC Texture Compression, was an interesting one. Apparently the Khronos Group, owners of OpenGL, were looking for a new generation of texture compression technologies. NVIDIA suggested their ZIL technology. ARM and AMD also proposed "Adaptive Scalable Texture Compression". ARM and AMD won, although the Khronos Group stated that the collaboration between ARM and NVIDIA made both proposals better than either in isolation.

Android Extension Pack is set to launch with "Android L". The next release of Android is not currently associated with a snack food. If I was their marketer, I would block out the next three versions as 5.x, and name them (L)emon, then (M)eringue, and finally (P)ie.

Would I do anything with the two skipped letters before pie? (N)(O).

Why you don't see more OpenGL games

Subject: General Tech | May 13, 2014 - 12:40 PM |
Tagged: opengl, Intel, amd, nvidia, graphics drivers

If you have ever wondered what happened to OpenGL games, which used to be common, there is a good post to read over on Slashdot.  A developer paints an honest and somewhat depressing picture of what it takes to write working OpenGL code in this day and age.  In his mind, the blame lies squarely on the driver teams at the three major graphics vendors, with different issues at each of them.  While they are officially referred to as Vendors A, B, and C, anyone even slightly familiar with the market will figure out exactly which companies are being referred to.  While this is a topic worthy of ranting comments, be aware that it refers specifically to the OpenGL driver, not the DirectX or Mantle drivers; each company has its own way of making programmers' lives difficult, and none is without blame.

images.jpg

"Rich Geldreich (game/graphics programmer) has made a blog post on the quality of different OpenGL Drivers. Using anonymous titles (Vendor A: Nvidia; Vendor B: AMD; Vendor C: Intel), he plots the landscape of game development using OpenGL. Vendor A, jovially known as 'Graphics Mafia' concentrates heavily on performance but won't share its specifications, thus blocking any open source driver implementations as much as possible. Vendor B has the most flaky drivers. They have good technical know-how on OpenGL but due to an extremely small team (money woes), they have shoddy drivers. Vendor C is extremely rich."

Here is some more Tech News from around the web:

Tech Talk

Source: Slashdot

GDC wasn't just about DirectX; OpenGL was also a hot topic

Subject: General Tech | March 24, 2014 - 12:26 PM |
Tagged: opengl, nvidia, gdc 14, GDC, amd, Intel

DX12 and its Mantle-like qualities garnered the most interest from gamers at GDC, but an odd trio of companies was also pushing a different API.  OpenGL has been around for over 20 years and has waged a long war against Direct3D, a war which may be intensifying again.  Representatives from Intel, AMD, and NVIDIA all took to the stage to praise the new OpenGL standard, suggesting that with a tweaked implementation of OpenGL, developers could expect to see performance increases of between 7 and 15 times.  The Inquirer has embedded an hour-long video in their story; check it out to learn more.

slide-1-638.jpg

"CHIP DESIGNERS AMD, Intel and Nvidia teamed up to tout the advantages of the OpenGL multi-platform application programming interface (API) at this year's Game Developers Conference (GDC)."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer
Manufacturer: NVIDIA

DX11 could rival Mantle

The big story at GDC last week was Microsoft’s reveal of DirectX 12 and the future of the dominant API for PC gaming.  There was plenty of build-up to the announcement, with Microsoft’s DirectX team posting teasers and starting up a Twitter account for the occasion. I hosted a live blog from the event, which included pictures of the slides. It was our most successful of these types of events, with literally thousands of people joining in the conversation. Between the debates over the similarities to AMD’s Mantle API and the timeline for DX12’s release, there are plenty of stories to be told.

After the initial session, I wanted to set up meetings with both AMD and NVIDIA to discuss what had been shown and get some feedback on the GPU giants’ planned directions for their implementations.  NVIDIA presented us with a very interesting set of data that focused both on the future with DX12 and on the now of DirectX 11.

15.jpg

The reason for the topic is easy to decipher: AMD has built up the image of Mantle as the future of PC gaming and, with a full 18 months before Microsoft’s DirectX 12 is released, how developers and gamers respond will make an important impact on the market. NVIDIA doesn’t like to talk about Mantle directly, but it obviously feels the need to address the questions in a roundabout fashion. During our time with NVIDIA’s Tony Tamasi at GDC, the discussion centered as much on OpenGL and DirectX 11 as anything else.

What are APIs and why do you care?

For those who might not really understand what DirectX and OpenGL are, a bit of background first. APIs (application programming interfaces) provide an abstraction layer between hardware and software applications.  An API can deliver a consistent programming model (though the language can vary) across various hardware vendors’ products, and even between hardware generations.  APIs can expose hardware feature sets with a wide range of complexity while letting users access the hardware without necessarily knowing great detail about it.
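As a purely hypothetical illustration of that abstraction (not any real API's interface): the application codes against one interface, and each vendor's driver supplies its own implementation underneath.

```cpp
// Sketch of API-style abstraction: the application talks to one
// interface; each vendor implements it for their own hardware.
struct GraphicsDevice {
    virtual ~GraphicsDevice() = default;
    virtual void draw_triangles(int vertex_count) = 0;  // same call everywhere
};

struct VendorADevice : GraphicsDevice {
    void draw_triangles(int vertex_count) override {
        // translate into Vendor A's command packets
    }
};

struct VendorBDevice : GraphicsDevice {
    void draw_triangles(int vertex_count) override {
        // a completely different translation for Vendor B
    }
};

void render(GraphicsDevice& gpu) {
    gpu.draw_triangles(3);  // application code never changes per vendor
}
```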

Over the years, APIs have developed and evolved but still retain backwards compatibility.  Companies like NVIDIA and AMD can improve DirectX implementations to increase performance or efficiency without adversely (usually at least) affecting other games or applications.  And because the games use that same API for programming, changes to how NVIDIA/AMD handle the API integration don’t require game developer intervention.

With the release of AMD Mantle, the idea of a “low level” API has been placed in the minds of gamers and developers.  The term “low level” can mean many things, but in general it is associated with an API that is more direct, has a thinner set of abstraction layers, and uses less translation from code to hardware.  The goal is to reduce the overhead (performance hit) that APIs naturally impose for these translations.  With additional performance available, the CPU cycles can be used by the program (game) or be slept to improve battery life. In certain cases, GPU throughput can increase where API overhead is impeding the video card's progress.

Passing additional control to the game developers, away from the API or GPU driver developers, gives those coders additional power and more room to differentiate. Interestingly, not all developers want this kind of control: it requires more time and more development work, and small teams that depend on the abstraction to make coding easier will see only limited performance advantages.

This transition to lower-level APIs is being driven by the widening performance gap between CPUs and GPUs.  NVIDIA provided the images below.

04.jpg

On the left we see performance scaling in terms of GFLOPS, and on the right the metric is memory bandwidth. Clearly, the performance of NVIDIA's graphics chips (as well as AMD’s) has far outpaced what the best Intel desktop processors have been able to deliver, and that gap means the industry needs to innovate to find ways to close it.

Continue reading NVIDIA Talks DX12, DX11 Efficiency Improvements!!!

GDC 14: NVIDIA, AMD, and Intel Discuss OpenGL Speed-ups

Subject: General Tech, Shows and Expos | March 22, 2014 - 01:41 AM |
Tagged: opengl, nvidia, Intel, gdc 14, GDC, amd

So, for all the discussion about DirectX 12, the three main desktop GPU vendors, NVIDIA, AMD, and Intel, want to tell OpenGL developers how to tune their applications. Using OpenGL 4.2 and a few cross-vendor extensions, because OpenGL is all about its extensions, a handful of known tricks can reduce driver overhead up to ten-fold and increase performance up to fifteen-fold. The talk is very graphics developer-centric, but it basically describes a series of tricks known to accomplish feats similar to what Mantle and DirectX 12 suggest.

opengl_logo.jpg

The 130-slide presentation is broken into a few sections, each GPU vendor getting a decent chunk of time. On occasion, they would mention which implementation fares better with a particular function call. The main point that they wanted to drive home (since they clearly repeated the slide three times with three different fonts) is that none of this requires a new API. Everything exists and can be implemented right now. The real trick is knowing how not to poke the graphics library in the wrong way.
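One of the better-known tricks in this family (my example, not necessarily one lifted from the slides) is persistent buffer mapping via ARB_buffer_storage, which eliminates the per-frame map/unmap round trips through the driver:

```cpp
#include <glad/glad.h>  // assumed loader

// Create a 4 MiB buffer that stays mapped for its whole lifetime
// (requires ARB_buffer_storage, core in OpenGL 4.4, or the extension
// on older contexts).
void* create_streaming_buffer(GLuint* buf) {
    const GLsizeiptr size = 4 * 1024 * 1024;
    const GLbitfield flags = GL_MAP_WRITE_BIT |
                             GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;

    glGenBuffers(1, buf);
    glBindBuffer(GL_ARRAY_BUFFER, *buf);
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);

    // Map once; each frame writes into the pointer directly (fencing
    // against in-flight GPU reads) instead of map/unmap churn.
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}
```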

The page also hosts a keynote from the recent Steam Dev Days.

That said, an advantage that I expect from DirectX 12 and Mantle is reduced driver complexity. Since the processors have settled into standards, I expect that drivers will not need to do as much unless the library demands it for legacy reasons. I am not sure how extending OpenGL will affect that benefit, as opposed to isolating the legacy code and building on a solid foundation, but I wonder if these extensions could be just as easy to maintain and optimize. Maybe they can be.

Either way, the performance figures do not lie.

Source: NVIDIA

GDC 14: EGL 1.5 Specification Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:02 AM |
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL

The Khronos Group has also released their ratified specification for EGL 1.5. This API sits at the center of data and event management between other Khronos APIs. This version increases security, improves interoperability between APIs, and extends support across many operating systems, including Android and 64-bit Linux.

khronos-EGL_500_123_75.png

The headline on the list of changes is that EGLImage objects move from the realm of extensions into EGL 1.5's core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is increased security around creating a graphics context, primarily designed for WebGL, where any arbitrary website can be the application. Further down the list is the EGLSync object, which allows further partnership between OpenGL (and OpenGL ES) and OpenCL; the GPU may not need CPU involvement when scheduling between tasks on the two APIs.
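As a small taste of the new core API (a minimal sketch; the entry points and tokens are EGL 1.5 core, while the surrounding function is mine), a fence sync that makes the CPU wait for the GPU looks like this:

```cpp
#include <EGL/egl.h>

// Block the CPU until the GPU has executed all commands issued so far
// on the current context (EGL 1.5 core, no extension suffixes needed).
void wait_for_gpu(EGLDisplay dpy) {
    EGLSync sync = eglCreateSync(dpy, EGL_SYNC_FENCE, nullptr);
    eglClientWaitSync(dpy, sync, EGL_SYNC_FLUSH_COMMANDS_BIT, EGL_FOREVER);
    eglDestroySync(dpy, sync);
}
```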

During the call, the representative also wanted to mention that developers have asked them to bring EGL back to Windows. While it has not happened yet, they have announced that it is a current target.

The EGL 1.5 spec is available at the Khronos website.

Source: Khronos