Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | July 7, 2014 - 04:06 AM | Scott Michaud
Tagged: tegra k1, OpenGL ES, opengl, Khronos, google io, google, android extension pack, Android
Sure, this is a little late. Honestly, when I first heard the announcement, I did not see much news in it. The slide from the keynote (below) showed four points: Tessellation, Geometry Shaders, Computer [sic] Shaders, and ASTC Texture Compression. I thought tessellation and geometry shaders were part of the OpenGL ES 3.1 spec, like compute shaders. This led to my immediate reaction: "Oh cool. They implemented OpenGL ES 3.1. Nice. Not worth a news post."
Image Credit: Blogogist
Apparently, they were not part of the ES 3.1 spec (although compute shaders are). My mistake. It turns out that Google is cooking up its own vendor-specific extensions. This is quite interesting, as it adds functionality to the API without the developer needing to target a specific GPU vendor (INTEL, NV, ATI, AMD), wait for approval from the Architecture Review Board (ARB), or use multi-vendor extensions (EXT). In other words, it sounds like developers can target Google as the vendor without knowing the actual hardware.
Hiding the GPU vendor from the developer is not the only reason for Google to host their own vendor extension. The added features are mostly from full OpenGL. This makes sense, because it was announced with NVIDIA and their Tegra K1, Kepler-based SoC. Full OpenGL compatibility was NVIDIA's selling point for the K1, due to its heritage as a desktop GPU. But, instead of requiring apps to be programmed with full OpenGL in mind, Google's extension pushes it to OpenGL ES 3.1. If the developer wants to dip their toe into OpenGL, then they could add a few Android Extension Pack features to their existing ES engine.
Epic Games' Unreal Engine 4 "Rivalry" Demo from Google I/O 2014.
The last feature, ASTC Texture Compression, is an interesting one. The Khronos Group, which governs OpenGL, was looking for a new generation of texture compression technology. NVIDIA suggested its ZIL technology, while ARM and AMD proposed "Adaptive Scalable Texture Compression". ARM and AMD won, although the Khronos Group stated that the collaboration between ARM and NVIDIA made both proposals better than either would have been in isolation.
Android Extension Pack is set to launch with "Android L". The next release of Android is not currently associated with a snack food. If I were their marketer, I would block out the next three versions as 5.x and name them (L)emon, then (M)eringue, and finally (P)ie.
Would I do anything with the two skipped letters before pie? (N)(O).
Subject: General Tech | May 13, 2014 - 12:40 PM | Jeremy Hellstrom
Tagged: opengl, Intel, amd, nvidia, graphics drivers
If you have ever wondered what happened to OpenGL games, which used to be common, there is a good post to read over on Slashdot. A developer paints an honest and somewhat depressing picture of what it takes to write working OpenGL code in this day and age. In his mind, the blame lies squarely on the driver teams at the three major graphics vendors, with different issues at each of them. While they are officially referred to as Vendors A, B and C, anyone even slightly familiar with the market will figure out exactly which companies are being referred to. While this is a topic worthy of ranting comments, be aware that it refers specifically to the OpenGL drivers, not the DirectX or Mantle drivers. Each company has its own way of making programmers' lives difficult; none is without blame.
"Rich Geldreich (game/graphics programmer) has made a blog post on the quality of different OpenGL Drivers. Using anonymous titles (Vendor A: Nvidia; Vendor B: AMD; Vendor C: Intel), he plots the landscape of game development using OpenGL. Vendor A, jovially known as 'Graphics Mafia' concentrates heavily on performance but won't share its specifications, thus blocking any open source driver implementations as much as possible. Vendor B has the most flaky drivers. They have good technical know-how on OpenGL but due to an extremely small team (money woes), they have shoddy drivers. Vendor C is extremely rich."
Here is some more Tech News from around the web:
- Qualcomm plans to shift 20nm orders from TSMC to Samsung or Globalfoundries, say sources @ DigiTimes
- NSA is accused of sneaking backdoors into hardware exports @ The Inquirer
- Mozilla axes HATED ads-in-Firefox tab ... but they won't stay dead for long @ The Register
- The Illusion of Overclocking Support @ Hardware Asylum
- WIN Awesome i5 4690 CYBERPOWER Z97 PC @ Kitguru
Subject: General Tech | March 24, 2014 - 12:26 PM | Jeremy Hellstrom
Tagged: opengl, nvidia, gdc 14, GDC, amd, Intel
DX12 and its Mantle-like qualities garnered the most interest from gamers at GDC, but an odd trio of companies was also pushing a different API. OpenGL has been around for over 20 years and has waged a long war against Direct3D, a war which may be intensifying again. Representatives from Intel, AMD and NVIDIA all took to the stage to praise the new OpenGL standard, suggesting that with a tweaked implementation of OpenGL, developers could expect to see performance increases of between 7 and 15 times. The Inquirer has embedded an hour-long video in their story; check it out to learn more.
"CHIP DESIGNERS AMD, Intel and Nvidia teamed up to tout the advantages of the OpenGL multi-platform application programming interface (API) at this year's Game Developers Conference (GDC)."
Here is some more Tech News from around the web:
- The TR Podcast 152: Intel's new desktop mojo, DX12, and TR does subscriptions
- DirectX 12 will also add new features for next-gen GPUs @ The Tech Report
- Malwarebytes offers Windows XP security support before Microsoft's April deadline @ The Inquirer
- Slow SSD Transition and The Consumer Mindset – Learning to Run With Flash @ SSD Review
- AMD Is Exploring A Very Interesting, More-Open Linux Driver Strategy @ Phoronix
- AT&T and Netflix get into very public spat over net neutrality @ The Register
DX11 could rival Mantle
The big story at GDC last week was Microsoft’s reveal of DirectX 12 and the future of the dominant API for PC gaming. There was plenty of build-up to the announcement, with Microsoft’s DirectX team posting teasers and starting up a Twitter account for the occasion. I hosted a live blog from the event which included pictures of the slides. It was our most successful event of this type, with literally thousands of people joining in the conversation. Along with the debates over the similarities to AMD’s Mantle API and the timeline for the DX12 release, there are plenty of stories to be told.
After the initial session, I wanted to set up meetings with both AMD and NVIDIA to discuss what had been shown and get some feedback on the GPU giants’ planned implementations. NVIDIA presented us with a very interesting set of data that focused both on the future with DX12 and on the now of DirectX 11.
The reason for the topic is easy to decipher: AMD has built up the image of Mantle as the future of PC gaming, and with a full 18 months before Microsoft’s DirectX 12 is released, how developers and gamers respond will have an important impact on the market. NVIDIA doesn’t like to talk about Mantle directly, but it obviously feels the need to address the questions in a roundabout fashion. During our time with NVIDIA’s Tony Tamasi at GDC, the discussion centered as much on OpenGL and DirectX 11 as anything else.
What are APIs and why do you care?
For those who might not really understand what DirectX and OpenGL are, a bit of background first. APIs (application programming interfaces) provide an abstraction layer between hardware and software applications. An API can deliver a consistent programming model (though the language can vary) across various hardware vendors' products and even between hardware generations. APIs can expose hardware feature sets with a wide range of complexity, allowing users access to hardware without necessarily knowing it in great detail.
Over the years, APIs have developed and evolved but still retain backwards compatibility. Companies like NVIDIA and AMD can improve DirectX implementations to increase performance or efficiency without adversely (usually at least) affecting other games or applications. And because the games use that same API for programming, changes to how NVIDIA/AMD handle the API integration don’t require game developer intervention.
With the release of AMD's Mantle, the idea of a “low level” API has been placed in the minds of gamers and developers. The term “low level” can mean many things, but in general it is associated with an API that is more direct, has a thinner set of abstraction layers, and uses less translation from code to hardware. The goal is to reduce the overhead (performance hit) that APIs naturally impose for these translations. With additional performance available, the CPU cycles can be used by the program (game), or the CPU can sleep to improve battery life. In certain cases, GPU throughput can increase where API overhead is impeding the video card's progress.
Passing control from the API or GPU driver developers to the game developers gives those coders additional power and improves the ability of some vendors to differentiate. Interestingly, not all developers want this kind of control: it requires more time and development work, and small teams that depend on abstraction to make coding easier will see only limited performance advantages.
The transition to a lower level API is being driven by the widening performance gap between CPUs and GPUs. NVIDIA provided the images below.
On the left we see performance scaling in terms of GFLOPS, and on the right the metric is memory bandwidth. Clearly the performance of NVIDIA's graphics chips (as with AMD's) has far outpaced what the best Intel desktop processors have been able to deliver, and that gap means the industry needs to innovate to find ways to close it.
Subject: General Tech, Shows and Expos | March 22, 2014 - 01:41 AM | Scott Michaud
Tagged: opengl, nvidia, Intel, gdc 14, GDC, amd
So, for all the discussion about DirectX 12, the three main desktop GPU vendors, NVIDIA, AMD, and Intel, want to tell OpenGL developers how to tune their applications. Using OpenGL 4.2 and a few cross-vendor extensions, because OpenGL is all about its extensions, a handful of known tricks can reduce driver overhead up to ten-fold and increase performance up to fifteen-fold. The talk is very graphics developer-centric, but it basically describes a series of tricks known to accomplish feats similar to what Mantle and DirectX 12 suggest.
The 130-slide presentation is broken into a few sections, each GPU vendor getting a decent chunk of time. On occasion, they mention which implementation fares better with a particular function call. The main point they wanted to drive home (since they repeated the slide three times with three different fonts) is that none of this requires a new API. Everything exists and can be implemented right now. The real trick is knowing how not to poke the graphics library in the wrong way.
The page also hosts a keynote from the recent Steam Dev Days.
That said, an advantage that I expect from DirectX 12 and Mantle is reduced driver complexity. Since the processors have settled into standards, I expect that drivers will not need to do as much unless the library demands it for legacy reasons. I am not sure how extending OpenGL will affect that benefit, as opposed to isolating the legacy and building on a solid foundation, but I wonder if these extensions could be just as easy to maintain and optimize. Maybe they can be.
Either way, the performance figures do not lie.
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 19, 2014 - 09:02 AM | Scott Michaud
Tagged: OpenGL ES, opengl, opencl, gdc 14, GDC, EGL
The Khronos Group has also released their ratified specification for EGL 1.5. This API is at the center of data and event management between other Khronos APIs. This version increases security, interoperability between APIs, and support for many operating systems, including Android and 64-bit Linux.
The headline on the list of changes is the promotion of EGLImage objects from the realm of extensions into EGL 1.5's core functionality, giving developers a reliable method of transferring textures and renderbuffers between graphics contexts and APIs. Second on the list is increased security around creating a graphics context, primarily designed for WebGL, where any arbitrary website can become a graphics application. Further down the list is the EGLSync object, which allows further partnership between OpenGL (and OpenGL ES) and OpenCL. The GPU may not need CPU involvement when scheduling between tasks on both APIs.
During the call, the representative also wanted to mention that developers have asked them to bring EGL back to Windows. While it has not happened yet, they have announced that it is a current target.
The EGL 1.5 spec is available at the Khronos website.
Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 17, 2014 - 09:01 AM | Scott Michaud
Tagged: OpenGL ES, opengl, Khronos, gdc 14, GDC
Today, day one of Game Developers Conference 2014, the Khronos Group has officially released the 3.1 specification for OpenGL ES. The main new feature, brought over from OpenGL 4, is the addition of compute shaders. This opens GPGPU functionality to mobile and embedded devices for applications developed in OpenGL ES, especially if the developer does not want to add OpenCL.
The update is backward-compatible with OpenGL ES 2.0 and 3.0 applications, allowing developers to add features, as available, to their existing apps. On the device side, most functionality is expected to arrive as a driver update in the majority of cases.
OpenGL ES (which stands for OpenGL for Embedded Systems, though it is rarely branded as such) delivers what the Khronos Group considers the most important features of the graphics library to the majority of devices. The group has been working toward merging ES with the "full" graphics library over time. The last release, OpenGL ES 3.0, was focused on becoming a direct subset of OpenGL 4.3. This release expands the feature-space it occupies.
OpenGL ES also forms the basis for WebGL. The current draft of WebGL 2.0 uses OpenGL ES 3.0 although that was not discussed today. I have heard murmurs (not from Khronos) about some parties pushing for compute shaders in that specification, which this announcement puts us closer to.
The new specification also adds other features, such as the ability to issue a draw without CPU intervention. You could imagine a particle simulation, for instance, that wants to draw the result after its compute shader terminates. Shading is also less rigid: vertex and fragment shaders do not need to be explicitly linked into a program before they are used. I inquired about the possibility that compute could be targeted at specific devices (for systems with two GPUs) and possibly load balanced, in a similar fashion to WebCL, but no confirmation or denial was provided (although my contact did mention that it would be interesting for apps that fall somewhere between OpenGL ES and OpenCL).
The OpenGL ES 3.1 spec is available at the Khronos website.
Subject: Editorial, General Tech | March 11, 2014 - 10:15 PM | Scott Michaud
Tagged: valve, opengl, DirectX
Late yesterday night, Valve released source code from their "ToGL" transition layer. This bundle of code sits between "[a] limited subset of Direct3D 9.0c" and OpenGL to translate engines which are designed in the former, into the latter. It was pulled out of the DOTA 2 source tree and published standalone... mostly. Basically, it is completely unsupported and probably will not even build without some other chunks of the Source engine.
Still, Valve did not need to release this code, but they did. The way a lot of open-source projects work is that someone dumps a starting blob and, if it is sufficient, the community pokes and prods it into a self-sustaining entity. The real question is whether the code Valve provided is sufficient. As is often the case, time will tell. Either way, this is a good thing that other companies really should embrace: giving out your old code to further the collective. We are just not sure how much good it will do.
ToGL is available now at Valve's GitHub page under the permissive, non-copyleft MIT license.
Subject: Graphics Cards | February 26, 2014 - 06:17 PM | Ryan Shrout
Tagged: opengl, nvidia, Mantle, gdc 14, GDC, DirectX 12, DirectX, amd
UPDATE (2/27/14): AMD sent over a statement today after seeing our story.
AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle, and AMD is very excited to share the future of our own API with developers at this year’s Game Developers Conference.
Credit to Scott and his reader at The Tech Report for spotting this interesting news today!
It appears that DirectX and OpenGL are going to be announcing some changes at next month's Game Developers Conference in San Francisco. According to information found in the session details, both APIs are trying to steal some thunder from AMD's Mantle, recently released with the Battlefield 4 patch. Mantle is an API built by AMD to enable more direct (lower level) access to its GCN graphics hardware, allowing developers to code games that are more efficient, providing better performance for the PC gamer.
From the session titled DirectX: Evolving Microsoft's Graphics Platform we find this description (emphasis mine):
For nearly 20 years, DirectX has been the platform used by game developers to create the fastest, most visually impressive games on the planet.
However, you asked us to do more. You asked us to bring you even closer to the metal and to do so on an unparalleled assortment of hardware. You also asked us for better tools so that you can squeeze every last drop of performance out of your PC, tablet, phone and console.
Come learn our plans to deliver.
Another DirectX session hosted by Microsoft is titled DirectX: Direct3D Futures (emphasis mine):
Come learn how future changes to Direct3D will enable next generation games to run faster than ever before!
In this session we will discuss future improvements in Direct3D that will allow developers an unprecedented level of hardware control and reduced CPU rendering overhead across a broad ecosystem of hardware.
If you use cutting-edge 3D graphics in your games, middleware, or engines and want to efficiently build rich and immersive visuals, you don't want to miss this talk.
Now look at a line from our initial article on AMD Mantle when announced at its Hawaii tech day event:
It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.
This is all sounding very familiar. It would appear that Microsoft has finally been listening to the development community and is working on the performance aspects of DirectX. Likely due in no small part to the push of AMD and Mantle's development, an updated DirectX 12 that includes a similar feature set and similar performance changes would shift the market in a few key ways.
Is it time again for innovation with DirectX?
First and foremost, what does this do for AMD's Mantle in the near or distant future? For now, BF4 will still include Mantle support, as will games like Thief (update pending), but going forward, if these DX12 changes are as substantial as I am being led to believe, it would be hard to see anyone really sticking with the AMD-only route. Of course, if DX12 doesn't address the performance and overhead issues in the same way that Mantle does, then all bets are off and we are back to square one.
Interestingly, OpenGL might also be getting into the ring with the session Approaching Zero Driver Overhead in OpenGL:
Driver overhead has been a frustrating reality for game developers for the entire life of the PC game industry. On desktop systems, driver overhead can decrease frame rate, while on mobile devices driver overhead is more insidious--robbing both battery life and frame rate. In this unprecedented sponsored session, Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today's OpenGL implementations that radically reduce driver overhead--by up to 10x or more. The techniques presented will apply to all major vendors and are suitable for use across multiple platforms. Additionally, they will demonstrate practical demos of the techniques in action in an extensible, open source comparison framework.
This description seems to indicate new or lesser-known programming methods that can be used with OpenGL to lower overhead without the need for custom APIs or even DX12. This could mean new modules from vendors or possibly a new revision to OpenGL - we'll find out next month.
All of this leaves us with a lot of questions that will hopefully be answered when we get to GDC in mid-March. Will this new version of DirectX be enough to reduce API overhead to appease even the stingiest of game developers? How will AMD react to this new competitor to Mantle (or was Mantle really only created to push this process along)? What time frame does Microsoft have on DX12? Does this save NVIDIA from any more pressure to build its own custom API?
Gaming continues to be the driving factor of excitement and innovation for the PC! Stay tuned for an exciting spring!
Podcast #285 - Frame Rating AMD Dual Graphics with Kaveri, Linux GPU Performance, and Dogecoin Mining!
Subject: General Tech | January 30, 2014 - 02:32 PM | Ken Addison
Tagged: podcast, frame rating, video, amd, Kaveri, A10 7850K, dual graphics, linux, opengl, Lenovo, IBM
PC Perspective Podcast #285 - 01/30/2014
Join us this week as we discuss Frame Rating AMD Dual Graphics with Kaveri, Linux GPU Performance, and Dogecoin Mining!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano
Week in Review:
News items of interest:
0:37:45 Quick Linux mention
And Motorola Mobility
Hardware/Software Picks of the Week:
Get notified when we go live!