GDC wasn't just about DirectX; OpenGL was also a hot topic

Subject: General Tech | March 24, 2014 - 12:26 PM |
Tagged: opengl, nvidia, gdc 14, GDC, amd, Intel

DX12 and its Mantle-like qualities garnered the most interest from gamers at GDC, but an odd trio of companies was also pushing a different API.  OpenGL has been around for over 20 years and has waged a long war against Direct3D, a war which may be intensifying again.  Representatives from Intel, AMD and NVIDIA all took to the stage to praise the new OpenGL standard, suggesting that with a tweaked implementation of OpenGL, developers could expect to see performance increases of 7 to 15 times.  The Inquirer has embedded an hour-long video in their story; check it out to learn more.

slide-1-638.jpg

"CHIP DESIGNERS AMD, Intel and Nvidia teamed up to tout the advantages of the OpenGL multi-platform application programming interface (API) at this year's Game Developers Conference (GDC)."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer
Author:
Manufacturer: NVIDIA

DX11 could rival Mantle

The big story at GDC last week was Microsoft’s reveal of DirectX 12 and the future of the dominant API for PC gaming.  There was plenty of build-up to the announcement, with Microsoft’s DirectX team posting teasers and starting up a Twitter account for the occasion. I hosted a live blog from the event which included pictures of the slides. It was our most successful event of this type, with thousands of people joining in the conversation. Along with the debates over the similarities to AMD’s Mantle API and the timeline for the DX12 release, there are plenty of stories to be told.

After the initial session, I wanted to set up meetings with both AMD and NVIDIA to discuss what had been shown and get some feedback on the planned direction for the GPU giants’ implementations.  NVIDIA presented us with a very interesting set of data that focused both on the future with DX12 and on the present state of DirectX 11.

15.jpg

The reason for the topic is easy to decipher – AMD has built up the image of Mantle as the future of PC gaming and, with a full 18 months before Microsoft’s DirectX 12 is released, how developers and gamers respond will have an important impact on the market. NVIDIA doesn’t like to talk about Mantle directly, but it obviously feels the need to address the questions in a roundabout fashion. During our time with NVIDIA’s Tony Tamasi at GDC, the discussion centered as much on OpenGL and DirectX 11 as anything else.

What are APIs and why do you care?

For those that might not really understand what DirectX and OpenGL are, a bit of background first. An API (application programming interface) provides an abstraction layer between hardware and software applications.  An API can deliver a consistent programming model (though the language can vary) across various hardware vendors’ products and even between hardware generations.  It can expose hardware feature sets with a wide range of complexity, giving developers access to the hardware without necessarily knowing it in great detail.

Over the years, APIs have developed and evolved but still retain backwards compatibility.  Companies like NVIDIA and AMD can improve their DirectX implementations to increase performance or efficiency without (usually, at least) adversely affecting other games or applications.  And because games all program against that same API, changes to how NVIDIA and AMD handle the API integration don’t require game developer intervention.
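To make that abstraction concrete, here is a minimal sketch (my own illustration, not from NVIDIA's material) of a draw routine written against OpenGL. Nothing in it references a particular GPU; the vendor's driver translates these calls into hardware commands behind the scenes.

```cpp
// Minimal illustrative sketch: the same API calls run unchanged on NVIDIA,
// AMD, or Intel hardware because each vendor's driver translates them into
// commands for its own GPU. Assumes an OpenGL 3+ context and that a function
// loader such as GLEW has already been initialized.
#include <GL/glew.h>

void draw_frame(GLuint vao, GLuint program, GLsizei vertex_count)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the framebuffer
    glUseProgram(program);                              // bind a compiled shader program
    glBindVertexArray(vao);                             // bind previously uploaded geometry
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);        // the driver maps this to GPU work
}
```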

With the release of AMD Mantle, the idea of a “low level” API has been placed in the minds of gamers and developers.  The term “low level” can mean many things, but in general it is associated with an API that is more direct, has a thinner set of abstraction layers, and requires less translation from code to hardware.  The goal is to reduce the overhead (performance cost) that APIs naturally impose for these translations.  With that performance freed up, the CPU cycles can be used by the program (game) or left idle to improve battery life. In certain cases, GPU throughput can increase where API overhead is holding the video card back.

Passing additional control to the game developers, away from the API or GPU driver developers, gives those coders more power and improves the ability for some vendors to differentiate. Interestingly, not all developers want this kind of control, as it requires more time and more development work, and small teams that depend on that abstraction to make coding easier will see only limited performance advantages.

This transition to a lower level API is being driven by the widening performance gap between CPUs and GPUs.  NVIDIA provided the images below.

04.jpg

On the left we see performance scaling in terms of GFLOPS and on the right the metric is memory bandwidth. Clearly the performance of NVIDIA's graphics chips (as well as AMD’s) has far outpaced what the best Intel desktop processors have been able to deliver, and that gap means the industry needs to innovate to find ways to close it.

Continue reading NVIDIA Talks DX12, DX11 Efficiency Improvements!!!

GDC 14: NVIDIA, AMD, and Intel Discuss OpenGL Speed-ups

Subject: General Tech, Shows and Expos | March 22, 2014 - 01:41 AM |
Tagged: opengl, nvidia, Intel, gdc 14, GDC, amd

So, for all the discussion about DirectX 12, the three main desktop GPU vendors, NVIDIA, AMD, and Intel, want to tell OpenGL developers how to tune their applications. Using OpenGL 4.2 and a few cross-vendor extensions (OpenGL is, after all, all about its extensions), a handful of known tricks can reduce driver overhead up to ten-fold and increase performance up to fifteen-fold. The talk is very graphics-developer-centric, but it basically describes a series of tricks known to accomplish feats similar to what Mantle and DirectX 12 promise.

opengl_logo.jpg

The 130-slide presentation is broken into a few sections, each GPU vendor getting a decent chunk of time. On occasion, they mention which implementation fares better with a given function call. The main point that they wanted to drive home (since they clearly repeated the slide three times with three different fonts) is that none of this requires a new API. Everything exists and can be implemented right now. The real trick is knowing how not to poke the graphics library in the wrong way.
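To give a flavor of what "not poking the library in the wrong way" can look like, here is a hedged sketch of one technique of this kind: a persistently mapped buffer (ARB_buffer_storage, core in OpenGL 4.4), which lets the CPU stream data to the GPU every frame without repeated map/unmap calls into the driver. The helper names and buffer size are mine, not taken from the slides.

```cpp
// Hedged sketch of one driver-overhead-reduction technique: a persistently
// mapped buffer the CPU can write into every frame without calling
// glMapBuffer/glUnmapBuffer (and paying driver cost) each time.
#include <GL/glew.h> // assumes an extension loader such as GLEW is initialized
#include <cstring>

GLuint buffer = 0;
void*  cpu_ptr = nullptr;
const GLsizeiptr kSize = 1 << 20; // 1 MiB streaming buffer (illustrative size)

void create_persistent_buffer()
{
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glBufferStorage(GL_ARRAY_BUFFER, kSize, nullptr, flags);      // immutable storage
    cpu_ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, kSize, flags); // map once, keep forever
}

void stream_vertices(const void* vertices, size_t bytes, size_t offset)
{
    // Write directly into GPU-visible memory; proper fencing (glFenceSync)
    // is still needed so we do not overwrite data the GPU is still reading.
    std::memcpy(static_cast<char*>(cpu_ptr) + offset, vertices, bytes);
}
```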

The page also hosts a keynote from the recent Steam Dev Days.

That said, an advantage that I expect from DirectX 12 and Mantle is reduced driver complexity. Since the processors have settled into standards, I expect that drivers will not need to do as much unless the library demands it for legacy reasons. I am not sure how extending OpenGL will affect that benefit, as opposed to isolating the legacy and building on a solid foundation, but I wonder if these extensions could be just as easy to maintain and optimize. Maybe they are.

Either way, the performance figures do not lie.

Source: NVIDIA

GDC 14: EGL 1.5 Specification Released by Khronos

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:02 AM |
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL

The Khronos Group has also released their ratified specification for EGL 1.5. This API is at the center of data and event management between other Khronos APIs. This version increases security, interoperability between APIs, and support for many operating systems, including Android and 64-bit Linux.

khronos-EGL_500_123_75.png

The headline on the list of changes is the promotion of EGLImage objects from extension to EGL 1.5 core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is increased security around creating a graphics context, primarily designed with WebGL in mind, since any arbitrary website can become a WebGL application. Further down the list is the EGLSync object, which allows a closer partnership between OpenGL (and OpenGL ES) and OpenCL; the GPU may not need CPU involvement when scheduling between tasks on both APIs.
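As a rough illustration of how the new core sync object is used (my own sketch, not sample code from Khronos), an application can drop a fence into the command stream of the current context and wait for the GPU to reach it; the eglWaitSync variant can even keep that wait on the GPU, with no CPU involvement.

```cpp
// Hedged sketch: an EGL 1.5 fence sync. Assumes a valid EGLDisplay and a
// current OpenGL ES context whose commands we want to wait on.
#include <EGL/egl.h>

bool wait_for_gpu(EGLDisplay display)
{
    // Insert a fence into the currently bound context's command stream.
    EGLSync fence = eglCreateSync(display, EGL_SYNC_FENCE, nullptr);
    if (fence == EGL_NO_SYNC)
        return false;

    // Block (with a flush) until the GPU has reached the fence.
    EGLint status = eglClientWaitSync(display, fence,
                                      EGL_SYNC_FLUSH_COMMANDS_BIT, EGL_FOREVER);
    eglDestroySync(display, fence);
    return status == EGL_CONDITION_SATISFIED;
}
```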

During the call, the representative also wanted to mention that developers have asked them to bring EGL back to Windows. While it has not happened yet, they have announced that it is a current target.

The EGL 1.5 spec is available at the Khronos website.

Source: Khronos

GDC 14: OpenGL ES 3.1 Spec Released by Khronos Group

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 17, 2014 - 09:01 AM |
Tagged: OpenGL ES, opengl, Khronos, gdc 14, GDC

Today, day one of Game Developers Conference 2014, the Khronos Group has officially released the 3.1 specification for OpenGL ES. The main new feature, brought over from OpenGL 4, is the addition of compute shaders. This opens GPGPU functionality to mobile and embedded devices for applications developed in OpenGL ES, especially if the developer does not want to add OpenCL.
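For a sense of what that looks like in code, here is a hedged, minimal sketch of an ES 3.1 compute pass (the buffer layout and helper function are my own illustration): a small shader doubles every value in a shader storage buffer, with no OpenCL required.

```cpp
// Hedged sketch: a minimal OpenGL ES 3.1 compute shader that doubles the
// values in a shader storage buffer, dispatched entirely through GL ES.
#include <GLES3/gl31.h>

const char* kComputeSrc = R"(#version 310 es
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer Data { float values[]; };
void main() {
    uint i = gl_GlobalInvocationID.x;
    values[i] = values[i] * 2.0;
})";

void run_compute(GLuint program, GLuint ssbo, GLuint num_elements)
{
    glUseProgram(program);                               // program built from kComputeSrc
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);  // bind the data buffer
    glDispatchCompute(num_elements / 64, 1, 1);           // assumes a multiple of 64 elements
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);       // make writes visible to later reads
}
```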

The update is backward-compatible with OpenGL ES 2.0 and 3.0 applications, allowing developers to add the new features, as available, to their existing apps. On the device side, the new functionality is expected to arrive as a driver update in the majority of cases.

opengl-es-logo.png

OpenGL ES, which stands for OpenGL for Embedded Systems although it is rarely branded as such, delivers what Khronos considers the most important features from the graphics library to the majority of devices. The Khronos Group has been working toward merging ES with the "full" graphics library over time. The last release, OpenGL ES 3.0, was focused on becoming a direct subset of OpenGL 4.3. This release expands upon the feature-space it occupies.

OpenGL ES also forms the basis for WebGL. The current draft of WebGL 2.0 uses OpenGL ES 3.0 although that was not discussed today. I have heard murmurs (not from Khronos) about some parties pushing for compute shaders in that specification, which this announcement puts us closer to.

The new specification also adds other features, such as the ability to issue a draw without CPU intervention. You could imagine a particle simulation, for instance, that wants to draw the result after its compute shader terminates. Shading is also less rigid: vertex and fragment shaders do not need to be explicitly linked into a program before they are used. I inquired about the possibility that compute devices could be targeted (for devices with two GPUs) and possibly load balanced, in a manner similar to WebCL, but no confirmation or denial was provided (although he did mention that it would be interesting for apps that fall somewhere in the middle of OpenGL ES and OpenCL).
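That particle scenario might look roughly like the following sketch (again my own illustration, assuming hypothetical simulation and drawing programs): the compute pass writes the draw parameters into a buffer, and glDrawArraysIndirect consumes them without the CPU ever reading the count back.

```cpp
// Hedged sketch of the particle example above: a compute pass writes the
// surviving particle count into an indirect-draw buffer, and the draw call
// reads it on the GPU with no CPU readback in between.
#include <GLES3/gl31.h>

struct DrawArraysIndirectCommand {
    GLuint count;          // filled in by the compute shader
    GLuint instanceCount;  // 1
    GLuint first;          // 0
    GLuint baseInstance;   // reserved, must be 0 in ES 3.1
};

void draw_particles(GLuint simProgram, GLuint drawProgram,
                    GLuint indirectBuffer, GLuint particleVao, GLuint groups)
{
    glUseProgram(simProgram);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, indirectBuffer); // compute writes the count here
    glDispatchCompute(groups, 1, 1);
    glMemoryBarrier(GL_COMMAND_BARRIER_BIT);                       // writes visible to the draw

    glUseProgram(drawProgram);
    glBindVertexArray(particleVao);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glDrawArraysIndirect(GL_POINTS, nullptr);                      // count comes from the GPU
}
```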

The OpenGL ES 3.1 spec is available at the Khronos website.

Source: Khronos

Valve's Direct3D to OpenGL Translator (Or Part of It)

Subject: Editorial, General Tech | March 11, 2014 - 10:15 PM |
Tagged: valve, opengl, DirectX

Late yesterday night, Valve released source code from their "ToGL" transition layer. This bundle of code sits between "[a] limited subset of Direct3D 9.0c" and OpenGL, translating engines designed for the former into the latter. It was pulled out of the DOTA 2 source tree and published standalone... mostly. Basically, it is completely unsupported and probably will not even build without some other chunks of the Source engine.
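To picture what a layer like this does, here is a purely illustrative shim (not Valve's actual code) that re-expresses a Direct3D 9 style draw call in OpenGL terms. ToGL performs this kind of mapping, plus shader and state translation, for the limited D3D9 subset it covers.

```cpp
// Purely illustrative sketch (not ToGL source): the kind of thin shim a
// D3D9-to-OpenGL layer provides, so an engine written against Direct3D 9
// can render through OpenGL instead.
#include <GL/gl.h>

enum D3DPRIMITIVETYPE { D3DPT_TRIANGLELIST = 4, D3DPT_TRIANGLESTRIP = 5 };

// Mimics IDirect3DDevice9::DrawPrimitive(PrimitiveType, StartVertex, PrimitiveCount).
void ToGL_DrawPrimitive(D3DPRIMITIVETYPE type, unsigned startVertex, unsigned primCount)
{
    switch (type) {
    case D3DPT_TRIANGLELIST:
        glDrawArrays(GL_TRIANGLES, startVertex, primCount * 3);      // 3 vertices per triangle
        break;
    case D3DPT_TRIANGLESTRIP:
        glDrawArrays(GL_TRIANGLE_STRIP, startVertex, primCount + 2); // a strip needs count + 2
        break;
    }
}
```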

valve-dx-opengl.jpg

Still, Valve did not need to release this code, but they did. The way a lot of open-source projects work is that someone dumps a starting blob and, if it is sufficient, the community pokes and prods it into a self-sustaining entity. The real question is whether the code that Valve provided is sufficient. As often is the case, time will tell. Either way, this is a good thing that other companies really should embrace: giving out your old code to further the collective. We are just not sure how good.

ToGL is available now at Valve's GitHub page under the permissive, non-copyleft MIT license.

Source: Valve GitHub

DirectX 12 and a new OpenGL to challenge AMD Mantle coming at GDC?

Subject: Graphics Cards | February 26, 2014 - 06:17 PM |
Tagged: opengl, nvidia, Mantle, gdc 14, GDC, DirectX 12, DirectX, amd

UPDATE (2/27/14): AMD sent over a statement today after seeing our story.  

AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle, and AMD is very excited to share the future of our own API with developers at this year’s Game Developers Conference.

Credit goes to Scott and his reader at The Tech Report for spotting this interesting news today!

It appears that DirectX and OpenGL are going to be announcing some changes at next month's Game Developers Conference in San Francisco.  According to some information found in the session details, both APIs are trying to steal some of the thunder from AMD's Mantle, recently released with the Battlefield 4 patch.  Mantle is an API built by AMD to enable more direct (lower level) access to its GCN graphics hardware, allowing developers to code games that are more efficient and provide better performance for the PC gamer.

mantle.jpg

From the session titled DirectX: Evolving Microsoft's Graphics Platform we find this description (emphasis mine):

For nearly 20 years, DirectX has been the platform used by game developers to create the fastest, most visually impressive games on the planet.

However, you asked us to do more. You asked us to bring you even closer to the metal and to do so on an unparalleled assortment of hardware. You also asked us for better tools so that you can squeeze every last drop of performance out of your PC, tablet, phone and console.

Come learn our plans to deliver.

Another DirectX session hosted by Microsoft is titled DirectX: Direct3D Futures (emphasis mine): 

Come learn how future changes to Direct3D will enable next generation games to run faster than ever before!

In this session we will discuss future improvements in Direct3D that will allow developers an unprecedented level of hardware control and reduced CPU rendering overhead across a broad ecosystem of hardware. 

If you use cutting-edge 3D graphics in your games, middleware, or engines and want to efficiently build rich and immersive visuals, you don't want to miss this talk.

Now look at a line from our initial article on AMD Mantle when announced at its Hawaii tech day event:

It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.

This is all sounding very familiar.  It would appear that Microsoft has finally been listening to the development community and is working on the performance aspects of DirectX.  Likely due in no small part to the push of AMD and Mantle's development, an updated DirectX 12 that includes a similar feature set and similar performance changes would shift the market in a few key ways.

olddirectx.jpg

Is it time again for innovation with DirectX?

First and foremost, what does this do for AMD's Mantle in the near or distant future?  For now, BF4 will still include Mantle support, as will games like Thief (update pending), but going forward, if these DX12 changes are as specific as I am being led to believe, then it would be hard to see anyone really sticking with the AMD-only route.  Of course, if DX12 doesn't really address the performance and overhead issues in the same way that Mantle does, then all bets are off and we are back to square one.

Interestingly, OpenGL might also be getting into the ring with the session Approaching Zero Driver Overhead in OpenGL:

Driver overhead has been a frustrating reality for game developers for the entire life of the PC game industry. On desktop systems, driver overhead can decrease frame rate, while on mobile devices driver overhead is more insidious--robbing both battery life and frame rate. In this unprecedented sponsored session, Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today's OpenGL implementations that radically reduce driver overhead--by up to 10x or more. The techniques presented will apply to all major vendors and are suitable for use across multiple platforms. Additionally, they will demonstrate practical demos of the techniques in action in an extensible, open source comparison framework.

This description seems to point more toward new or lesser-known programming methods that can be used with OpenGL to lower overhead without the need for custom APIs or even DX12.  This could be new modules from vendors or possibly a new revision to OpenGL - we'll find out next month.
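For a taste of the kind of technique such a session could cover (this is my own example, not taken from the session itself), OpenGL 4.3's multi-draw indirect lets an engine pack many draws into a single call, so the driver is entered once instead of once per object:

```cpp
// Hedged example of an existing overhead-reduction mechanism: batching many
// draws into one glMultiDrawElementsIndirect call (OpenGL 4.3 /
// ARB_multi_draw_indirect) instead of issuing one draw call per object.
#include <GL/glew.h> // assumes an extension loader such as GLEW is initialized

// Layout of each entry in the indirect buffer, one per object.
struct DrawElementsIndirectCommand {
    GLuint count, instanceCount, firstIndex, baseVertex, baseInstance;
};

// The VAO carries the vertex and index buffers; 'indirectBuffer' holds
// 'drawCount' tightly packed commands uploaded ahead of time.
void submit_batch(GLuint vao, GLuint indirectBuffer, GLsizei drawCount)
{
    glBindVertexArray(vao);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                nullptr,   // offset 0 into the indirect buffer
                                drawCount, // number of packed draw commands
                                0);        // tightly packed (stride = 0)
}
```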

All of this leaves us with a lot of questions that will hopefully be answered when we get to GDC in mid-March.  Will this new version of DirectX be enough to reduce API overhead to appease even the stingiest of game developers?  How will AMD react to this new competitor to Mantle (or was Mantle really only created to push this process along)?  What time frame does Microsoft have on DX12?  Does this save NVIDIA from any more pressure to build its own custom API?

Gaming continues to be the driving factor of excitement and innovation for the PC!  Stay tuned for an exciting spring!

Source: Tech Report

Podcast #285 - Frame Rating AMD Dual Graphics with Kaveri, Linux GPU Performance, and Dogecoin Mining!

Subject: General Tech | January 30, 2014 - 02:32 PM |
Tagged: podcast, frame rating, video, amd, Kaveri, A10 7850K, dual graphics, linux, opengl, Lenovo, IBM

PC Perspective Podcast #285 - 01/30/2014

Join us this week as we discuss Frame Rating AMD Dual Graphics with Kaveri, Linux GPU Performance, and Dogecoin Mining!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano

 
Program length: 1:02:01
  1. Week in Review:
  2. News items of interest:
    1. 0:37:45 Quick Linux mention
      1. And Motorola Mobility
  3. Hardware/Software Picks of the Week:
  4. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

 

NVIDIA still holds the OpenGL crown on Linux; AMD is getting better though

Subject: General Tech | January 23, 2014 - 02:58 PM |
Tagged: opengl, linux, amd, nvidia

If you are a Linux user who prefers to use OpenGL graphics, there is still a huge benefit to choosing NVIDIA over AMD.  The tests Phoronix just completed show that the GTX 680, 770 and 780 all perform significantly faster than the R9 290, with even the older GTX 550 Ti and 650 GPUs outperforming AMD's best in some benchmarks.  That said, AMD is making important improvements to its open source drivers, as that is where it lags behind NVIDIA.  The new RadeonSI Gallium3D driver for the HD 7000 series shows significant performance improvements when paired with the new 3.13 kernel; while it still falls a bit behind the Catalyst driver, it is now much closer to the performance of the proprietary driver.  For older cards the performance increase is nowhere near as impressive, but certain benchmarks do show this Gallium3D driver providing at least some improvement.  It's a pity the Source engine isn't behaving properly during benchmarks, which is why no tests were run on Valve's games, but that should be solved in the near future.

image.php_.jpg

"In new tests conducted last week with the latest AMD and NVIDIA binary graphics drivers, the high-end AMD GPUs still really aren't proving much competition to NVIDIA's Kepler graphics cards. Here's a new 12 graphics card comparison on Ubuntu."

Here is some more Tech News from around the web:

Tech Talk

Source: Phoronix

Windows versus Linux in an OpenGL free-for-all

Subject: General Tech | April 8, 2013 - 01:59 PM |
Tagged: linux, ubuntu 13.04, fedora 18, win7, opengl, Ivy Bridge

One major barrier to switching to Linux for many users is the graphical performance of the OS; Steam may be releasing a variety of games which will run on Linux, but if the performance is awful there are not going to be many who think about making the switch.  Phoronix has been keeping a close eye on the development of OpenGL drivers for Linux, this time specifically the onboard Intel graphics present on Ivy Bridge chips.  With one driver available for each OS the tests were easily set up, except for the aforementioned Steam games, as there is a bug which prevents Phoronix from collecting the performance data they need.  Check out the performance differences between Ubuntu 13.04, Fedora 18 and Win7 in the full article.

Ubuntu_Raring_Ringtail_Wallpapers_01.jpg

"Last month Phoronix published Intel OpenGL benchmarks showing Windows 8 outperforming Ubuntu 13.04 with the latest Windows and Linux drivers from Intel. I also showed that even with the KDE and Xfce desktops rather than the default Unity/Compiz desktop to Ubuntu, Windows 8 still was faster on this Intel "Ivy Bridge" platform. The new benchmarks to share today from this Intel Ultrabook are the Windows 8 and Ubuntu 13.04 results but also with performance figures added in from Microsoft Windows 7 Professional Service Pack 1 x64 and Fedora 18."

Here is some more Tech News from around the web:

Tech Talk

Source: Phoronix