GDC 14: OpenGL ES 3.1 Spec Released by Khronos Group

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 17, 2014 - 09:01 AM |
Tagged: OpenGL ES, opengl, Khronos, gdc 14, GDC

Today, on day one of Game Developers Conference 2014, the Khronos Group officially released the 3.1 specification for OpenGL ES. The main new feature, brought over from OpenGL 4, is the addition of compute shaders. This brings GPGPU functionality to OpenGL ES applications on mobile and embedded devices, which is especially useful for developers who do not want to take on an OpenCL dependency.
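To give a flavor of what that looks like in practice, here is a minimal sketch (our illustration, not Khronos sample code) of an ES 3.1 compute shader that doubles every value in a storage buffer, dispatched from C. The names and the 64-wide work group size are ours.

```c
#include <GLES3/gl31.h>

/* GLSL ES 3.1 compute shader: doubles every float in a storage buffer. */
static const char *kComputeSrc =
    "#version 310 es\n"
    "layout(local_size_x = 64) in;\n"
    "layout(std430, binding = 0) buffer Data { float values[]; };\n"
    "void main() {\n"
    "    values[gl_GlobalInvocationID.x] *= 2.0;\n"
    "}\n";

/* Assumes 'program' was compiled and linked from kComputeSrc and that
 * 'count' is a multiple of the 64-wide work group. */
void run_compute(GLuint program, GLuint ssbo, GLuint count)
{
    glUseProgram(program);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
    glDispatchCompute(count / 64, 1, 1);            /* one group per 64 items */
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT); /* make writes visible */
}
```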

The update is backward-compatible with OpenGL ES 2.0 and 3.0, allowing developers to add the new features to their existing apps as support becomes available. On the device side, support is expected to arrive as a driver update in most cases.


OpenGL ES, which stands for OpenGL for Embedded Systems (though it is rarely branded as such), delivers what the Khronos Group considers the most important features of desktop OpenGL to the majority of devices. Khronos has been working toward merging ES with the "full" graphics library over time. The previous release, OpenGL ES 3.0, focused on becoming a direct subset of OpenGL 4.3. This release expands the feature space it occupies.

OpenGL ES also forms the basis of WebGL. The current draft of WebGL 2.0 is based on OpenGL ES 3.0, although that was not discussed today. I have heard murmurs (not from Khronos) that some parties are pushing for compute shaders in that specification, and this announcement brings that a step closer.

The new specification adds other features as well, such as the ability to issue a draw without CPU intervention. Imagine, for instance, a particle simulation that wants to draw its results as soon as its compute shader finishes. Shading is also less rigid: vertex and fragment shaders no longer need to be explicitly linked into a single program before they are used. I inquired about the possibility that compute work could be targeted at a specific device (for hardware with two GPUs) and perhaps load-balanced, in a manner similar to WebCL, but no confirmation or denial was provided (although my contact did mention that it would be interesting for apps that fall somewhere between OpenGL ES and OpenCL).
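As a rough sketch of that CPU-free draw (again our illustration, with hypothetical program and buffer handles): a compute shader fills a draw-indirect command in GPU memory, and the draw consumes it with no readback.

```c
#include <GLES3/gl31.h>

/* Layout of the indirect command, as defined by the GL spec. */
typedef struct {
    GLuint count, instanceCount, first, baseInstance;
} DrawArraysIndirectCommand;

void draw_particles(GLuint computeProg, GLuint drawProg,
                    GLuint indirectBuf, GLuint vao)
{
    glUseProgram(computeProg);  /* writes particle data plus the command */
    glDispatchCompute(1, 1, 1);
    glMemoryBarrier(GL_COMMAND_BARRIER_BIT |          /* command writes land */
                    GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT); /* particle data too */

    glUseProgram(drawProg);
    glBindVertexArray(vao);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
    glDrawArraysIndirect(GL_POINTS, 0); /* vertex count comes from the GPU */
}
```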

The OpenGL ES 3.1 spec is available at the Khronos website.

Source: Khronos

GDC 14: Valve's Steam Controller Is Similar to Dev Days

Subject: General Tech, Cases and Cooling, Shows and Expos | March 15, 2014 - 01:44 AM |
Tagged: GDC, gdc 14, valve, Steam Controller

Two months ago, Valve presented a new prototype of their Steam Controller with a significantly changed button layout. While the overall shape and two thumbpads remained constant, the touchscreen disappeared and the face buttons more closely resembled something from an Xbox or PlayStation. Another prototype image has been released, ahead of GDC, without many changes.


Valve is still iterating on its controller, however. Ten controllers, each handmade, will be available at GDC. This version has been tested internally for some undisclosed amount of time, but this will be the first time outsiders give feedback since the design shown at CES. The big unknown is to what degree Valve will respond to that feedback. Are we at the stage where it is about button sizing? Or will it change radically, like to a two-slice toaster case with buttons inside the slots?

GDC is taking place March 17th through the 21st. The expo floor opens on the 19th.

GDC 14: Mozilla & Epic Games Run Unreal Engine 4 in Firefox

Subject: General Tech, Shows and Expos | March 12, 2014 - 09:17 PM |
Tagged: GDC, gdc 14, mozilla, epic games, unreal engine 4

Epic Games has wanted Unreal Engine in the web browser for quite some time now. Back in 2011, the company presented its Citadel demo running in Flash 11.2. A short while later, Mozilla and Epic ported it to raw JavaScript and WebGL. With the help of asm.js, a highly optimizable subset of JavaScript, Unreal Engine 3 was at home in the browser at near-native speed, with no plugins. Epic's Tim Sweeney and Mark Rein, in an interview with Gamasutra, said that Unreal Engine 4 will take it beyond a demo and target web browsers as a supported platform.

Today, Mozilla teases Unreal Engine 4 running in Firefox, ahead of GDC.

Speaking of speed, asm.js can now reach about 67% of native performance, and Mozilla is still optimizing its compiler. While it is difficult to write asm.js-compliant code by hand, companies like Epic simply compile their existing C/C++ code through Emscripten into that optimized JavaScript. If your native application has a bit of CPU headroom, it could be little more than a recompile away from running in the web browser, possibly any web browser on any platform, without plugins. This obviously has great implications for timeless classics that would otherwise outlive their host platforms.
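The workflow is worth a concrete sketch. Assuming the Emscripten toolchain (emcc) is installed, a plain C program like this hypothetical one compiles to asm.js-style JavaScript with no browser-specific code at all:

```c
#include <stdio.h>

/* Ordinary portable C; nothing here knows it will run in a browser. */
static int fib(int n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

int main(void)
{
    printf("fib(30) = %d\n", fib(30));
    return 0;
}

/* Compile with something like:
 *   emcc -O2 fib.c -o fib.html
 * then open fib.html; Firefox's asm.js fast path picks up the output. */
```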

Both Mozilla and Epic will have demos in their booths on the conference floor.

Source: Mozilla

Podcast #290 - Intel SSD 730, ASUS Maximus VI Formula, DirectX 12 and more!

Subject: General Tech | March 6, 2014 - 02:10 PM |
Tagged: video, podcast, asus, amd, AM1, Maximus VI Formula, Intel, ssd, SSD 730, DirectX 12, GDC, coolermaster, CMStorm, R9 290X, Bay Trail

PC Perspective Podcast #290 - 03/06/2014

Join us this week as we discuss the Intel SSD 730, ASUS Maximus VI Formula, DirectX 12 and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano

 
This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset!
 
Program length: 1:27:52
  1. Week in Review:
  2. 0:41:43 This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset
  3. News items of interest:
  4. Hardware/Software Picks of the Week:
  5. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

Microsoft, Along with AMD, Intel, NVIDIA, and Qualcomm, Will Announce DirectX 12 at GDC 2014

Subject: General Tech, Graphics Cards, Mobile, Shows and Expos | March 5, 2014 - 08:28 PM |
Tagged: qualcomm, nvidia, microsoft, Intel, gdc 14, GDC, DirectX 12, amd

The announcement of DirectX 12 has been given a date and time via a blog post on the Microsoft Developer Network (MSDN) blogs. On March 20th at 10:00am (I assume PDT), a few days into the 2014 Game Developers Conference in San Francisco, California, the upcoming specification should be detailed for attendees. Apparently, four GPU manufacturers will also be involved with the announcement: AMD, Intel, NVIDIA, and Qualcomm.


As we reported last week, DirectX 12 is expected to target increased hardware control and decreased CPU overhead for added performance in "cutting-edge 3D graphics" applications. Really, this is the best time for it. Graphics processors have mostly settled into being highly efficient co-processors for parallel data, with some specialized logic for geometry and video tasks. A new specification can relax what is demanded of video drivers and thus keep the GPU (or GPUs, in Mantle's case) loaded and utilized.

But, to me, the most interesting part of this announcement is the nod to Qualcomm. Microsoft values DirectX as leverage over other x86 and ARM-based operating systems. With Qualcomm involved, Microsoft clearly believes that either Windows RT or Windows Phone will benefit from the API's next version. While it will probably make PC gamers nervous, mobile platforms stand to benefit most from reduced CPU overhead, especially if the remaining work can be spread across multiple cores.

Honestly, that is fine by me. As long as Microsoft returns to treating the PC as a first-class citizen, I do not mind them helping mobile, too. We will definitely keep you up to date as we know more.

Source: MSDN Blogs

DirectX 12 and a new OpenGL to challenge AMD Mantle coming at GDC?

Subject: Graphics Cards | February 26, 2014 - 06:17 PM |
Tagged: opengl, nvidia, Mantle, gdc 14, GDC, DirectX 12, DirectX, amd

UPDATE (2/27/14): AMD sent over a statement today after seeing our story.  

AMD would like you to know that it supports and celebrates a direction for game development that is aligned with AMD’s vision of lower-level, ‘closer to the metal’ graphics APIs for PC gaming. While industry experts expect this to take some time, developers can immediately leverage efficient API design using Mantle, and AMD is very excited to share the future of our own API with developers at this year’s Game Developers Conference.

Credit over to Scott and his reader at The Tech Report for spotting this interesting news today!!

It appears that changes to both DirectX and OpenGL will be announced at next month's Game Developers Conference in San Francisco.  According to information found in the session details, both APIs are trying to steal some of the thunder from AMD's Mantle, recently released with the Battlefield 4 patch.  Mantle is an API built by AMD to enable more direct (lower-level) access to its GCN graphics hardware, allowing developers to write games that are more efficient and provide better performance for the PC gamer.


From the session titled DirectX: Evolving Microsoft's Graphics Platform we find this description (emphasis mine):

For nearly 20 years, DirectX has been the platform used by game developers to create the fastest, most visually impressive games on the planet.

However, you asked us to do more. You asked us to bring you even closer to the metal and to do so on an unparalleled assortment of hardware. You also asked us for better tools so that you can squeeze every last drop of performance out of your PC, tablet, phone and console.

Come learn our plans to deliver.

Another DirectX session hosted by Microsoft is titled DirectX: Direct3D Futures (emphasis mine): 

Come learn how future changes to Direct3D will enable next generation games to run faster than ever before!

In this session we will discuss future improvements in Direct3D that will allow developers an unprecedented level of hardware control and reduced CPU rendering overhead across a broad ecosystem of hardware. 

If you use cutting-edge 3D graphics in your games, middleware, or engines and want to efficiently build rich and immersive visuals, you don't want to miss this talk.

Now look at a line from our initial article on AMD Mantle when announced at its Hawaii tech day event:

It bypasses DirectX (and possibly the hardware abstraction layer) and developers can program very close to the metal with very little overhead from software.

This is all sounding very familiar.  It would appear that Microsoft has finally been listening to the development community and is working on the performance aspects of DirectX, likely due in no small part to the push from AMD and Mantle's development.  An updated DirectX 12 that includes a similar feature set and similar performance changes would shift the market in a few key ways.


Is it time again for innovation with DirectX?

First and foremost, what does this do for AMD's Mantle in the near or distant future?  For now, BF4 will still include Mantle support, as will games like Thief (update pending), but going forward, if these DX12 changes are as sweeping as I am being led to believe, it would be hard to see anyone really sticking with the AMD-only route.  Of course, if DX12 doesn't address the performance and overhead issues in the same way Mantle does, then all bets are off and we are back to square one.

Interestingly, OpenGL might also be getting into the ring with the session Approaching Zero Driver Overhead in OpenGL:

Driver overhead has been a frustrating reality for game developers for the entire life of the PC game industry. On desktop systems, driver overhead can decrease frame rate, while on mobile devices driver overhead is more insidious--robbing both battery life and frame rate. In this unprecedented sponsored session, Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today's OpenGL implementations that radically reduce driver overhead--by up to 10x or more. The techniques presented will apply to all major vendors and are suitable for use across multiple platforms. Additionally, they will demonstrate practical demos of the techniques in action in an extensible, open source comparison framework.

This description seems to indicate more about new or lesser known programming methods that can be used with OpenGL to lower overhead without the need for custom APIs or even DX12.  This could be new modules from vendors or possibly a new revision to OpenGL - we'll find out next month.
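For a taste of the kind of technique the session title suggests, here is a sketch of persistent buffer mapping, one approach widely associated with the zero-driver-overhead push. It relies on glBufferStorage (an OpenGL 4.4 / ARB_buffer_storage feature), and the handles and usage here are our own illustration, not code from the presenters:

```c
#include <GL/glcorearb.h>

/* Map a buffer once and keep writing into it every frame, instead of
 * paying for a Map/Unmap driver round trip on each update. */
void *create_persistent_buffer(GLuint *buf, GLsizeiptr size)
{
    const GLbitfield flags = GL_MAP_WRITE_BIT |
                             GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;
    glGenBuffers(1, buf);
    glBindBuffer(GL_ARRAY_BUFFER, *buf);
    glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags); /* immutable store */

    /* The returned pointer stays valid for the buffer's lifetime; use
     * glFenceSync to avoid overwriting data the GPU is still reading. */
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}
```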

All of this leaves us with a lot of questions that will hopefully be answered when we get to GDC in mid-March.  Will this new version of DirectX be enough to reduce API overhead to appease even the stingiest of game developers?  How will AMD react to this new competitor to Mantle (or was Mantle really only created to push this process along)?  What time frame does Microsoft have on DX12?  Does this save NVIDIA from any more pressure to build its own custom API?

Gaming continues to be the driving factor of excitement and innovation for the PC!  Stay tuned for an exciting spring!

Source: Tech Report

GDC 13: Intel announces PixelSync and InstantAccess

Subject: General Tech, Shows and Expos | March 27, 2013 - 08:51 PM |
Tagged: Intel, GDC 13, GDC

GDC 2013 is where the industry comes together to talk about the technology itself. Intel was there and, of course, had to unveil developments to help it in the PC gaming space: two major new rendering extensions and updated developer tools. And, if you are not a developer, you can now encode your movies with Handbrake more quickly!


First up is PixelSync, a DirectX extension for Intel HD Graphics. PixelSync is designed to be used with smoke, hair, and other effects which require blending translucent geometry. With this extension, objects do not need to be sorted before compositing.

Next up is InstantAccess. This DirectX extension allows the CPU and the integrated GPU to access the same physical memory. What interests me most about InstantAccess is that developers can write GPU-based code which quickly accesses the same memory as its CPU counterpart. Should the integrated GPU be visible alongside a discrete GPU, this could allow the integrated graphics to offload GPGPU tasks such as physics while the CPU and discrete GPU handle the more important work.

Also updated is Intel's Graphics Performance Analyzers toolset. If you are interested in optimizing the performance of your software, be sure to check it out.

And for the more general public... Handbrake is now set up to take advantage of Quick Sync Video. Given the popularity of Handbrake, this is quite a big deal for anyone wishing to transcode video using popular and free encoders.

Source: Intel

GDC 2013: AMD Announces Sky Graphics Cards to Accelerate Cloud Gaming

Subject: General Tech, Graphics Cards | March 27, 2013 - 08:16 PM |
Tagged: sky graphics, sky 900, RapidFire, radeon sky, pc gaming, GDC, cloud gaming, ciinow, amd

AMD is making a new push into cloud gaming with a new series of Radeon graphics cards called Sky. The new cards feature a (mysterious) technology called "RapidFire" that allegedly provides "highly efficient and responsive game streaming" from servers to your various computing devices (tablets, PCs, Smart TVs) over the Internet. At this year's Game Developers Conference (GDC), the company announced that it is working with a number of existing cloud gaming companies, providing hardware and drivers to reduce latency.


AMD is working with Otoy, G-Cluster, Ubitus, and CiiNow. CiiNow in particular was heavily discussed by AMD, and can reportedly provide lower latency than cloud gaming competitor Gaikai. AMD Sky is, in many ways, similar in scope to NVIDIA's GRID technology which was announced last year and shown off at GTC last week. Obviously, that has given NVIDIA a head start, but it is difficult to say how AMD's technology will stack up as the company is not yet providing any specifics. Joystiq was able to obtain information on the high-end Radeon Sky graphics card, however (that's something at least...). The Sky 900 reportedly features 3,584 stream processors, 6GB of GDDR5 RAM, and 480 GB/s of bandwidth. Further, AMD has indicated that the new Radeon Sky cards will be based on the company's Graphics Core Next architecture.

                   Sky 900    Radeon 7970
Stream Processors  3,584      2,048
Memory             6GB        3GB
Memory Bandwidth   480GB/s    264GB/s

I think it is safe to assume that the Sky cards will be sold to cloud gaming companies; they will not be consumer cards, and AMD is not going to get into the cloud gaming business itself. Beyond that, AMD's Sky cloud gaming initiative is still a mystery. Hopefully, more details will filter out between now and the AMD Fusion Developer Summit this summer.

Source: Joystiq

GDC 13: 17 Minute Battlefield 4 Trailer Released

Subject: Editorial, General Tech, Shows and Expos | March 27, 2013 - 03:25 AM |
Tagged: battlefield, battlefield 4, GDC, GDC 13


Battlefield 4 is coming; that much has been known since Medal of Honor: Warfighter's release and its promise of beta access. But the gameplay trailer is already here, and clocking in at just over 17 minutes, "Fishing in Baku" looks amazing from a technical standpoint.

The video has been embedded below. It is a little not safe for work due to language and an amputation.

Now that you have finished gawking, we have gameplay to discuss. I cannot help but be disappointed with the campaign's direction. Surely the story was planned prior to the release of Battlefield 3; still, it seems to face the same generic-character problem that struck the last campaign.

In Battlefield 3, I really could not recognize many characters apart from the lead, which made their deaths more confusing than upsetting. Normally, when we claim a character is identifiable, we mean that we can relate to them. In this case, when I say the characters were not identifiable, I seriously mean that I probably could not pick them out of a police lineup.

Then again, the leaked promotional image for Battlefield 4 seems to show Blackburn at the helm, so I guess there is some hope. Slim hope, and the trailer does not add to it; even the end narration underscored how pointless the character interactions were. All this in spite of EA's YouTube description proclaiming it to be human, dramatic, and believable.

Oh well, it went boom good.

GTX 680, Turbo Cores, and Cuda Cores!

Subject: Graphics Cards | March 8, 2012 - 06:59 PM |
Tagged: nvidia, kepler, gtx 680, GDC

It seems that there have been a few leaks about NVIDIA's first Kepler-based product.  Techpowerup and Extreme Tech are both reporting on leaks that apparently came from CeBIT and some of NVIDIA's partners.  We now have a much better idea of what the GTX 680 is all about.

mark-rein-kepler-nvidia-gk104.jpg

Epic's Mark Rein is showing off his own GTX 680 which successfully ran their Samaritan Demo.  It is wrapped for his protection.  (Image courtesy of Extreme Tech)

The chip that powers the GTX 680 is the GK104, and it is, oddly enough, the more "midrange/enthusiast" offering.  It has a total of 1536 CUDA cores, runs at 703 MHz core and 1406 MHz hot clock, has a 256-bit memory bus pumping out 196 GB/sec, and sports a new and interesting feature that is quite a bit like the Turbo Core functionality we see from both AMD and Intel in their CPUs.  Apparently, when a scene gets very complex, the chip can overclock itself up to 900 MHz core/1800 MHz hot clock.  It will stay there either for as long as the scene needs it or until the chip approaches its upper TDP limit.
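To make that behavior concrete, here is a toy model (ours, not NVIDIA's actual algorithm) of the boost logic the leaks describe: clock up under a complex scene, clock back down as the chip nears its TDP limit. The thresholds and step size are arbitrary.

```c
typedef struct { int base_mhz; int boost_mhz; double tdp_watts; } gpu_limits;

/* Pick the next core clock from the current load and power draw. */
int next_clock_mhz(gpu_limits lim, int cur_mhz, double load, double watts)
{
    if (watts >= 0.95 * lim.tdp_watts)          /* nearing TDP: back off */
        return cur_mhz > lim.base_mhz ? cur_mhz - 13 : lim.base_mhz;
    if (load > 0.90 && cur_mhz < lim.boost_mhz) /* complex scene: step up */
        return cur_mhz + 13;
    return cur_mhz;                             /* hold steady */
}
```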

These reports paint the GTX 680 as being about 10% faster than the HD 7970 in certain applications, but slower in others.  I figure that when reviews are finally released, the two cards will have traded blows over which is the fastest graphics card.  Let's call it a draw.

The GTX 680 should be unveiled in the next week or so, but initial reviews will not surface until later in the month, and retail availability will follow after that.  With the issues TSMC has had with its 28 nm process (production has been stopped since the middle of February), we have no idea how much product NVIDIA and its partners have.  Things could be scarce for some time after the introduction.

Source: NVIDIA