Khronos Announces "Next" OpenGL & Releases OpenGL 4.5

Subject: General Tech, Graphics Cards, Shows and Expos | August 15, 2014 - 05:33 PM |
Tagged: siggraph 2014, Siggraph, OpenGL Next, opengl 4.5, opengl, nvidia, Mantle, Khronos, Intel, DirectX 12, amd

Let's be clear: there are two stories here. The first is the release of OpenGL 4.5, and the second is the announcement of the "Next Generation OpenGL Initiative". Both appear in the same press release, but they are two different announcements.

OpenGL 4.5 Released

OpenGL 4.5 expands the core specification with a few extensions. Compatible hardware with OpenGL 4.5 drivers is guaranteed to support these. This includes features like ARB_direct_state_access, which allows modifying objects without binding them to the context, and support for OpenGL ES 3.1 features that are traditionally missing from OpenGL 4, which makes it easier to port OpenGL ES 3.1 applications to OpenGL.
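
For a feel of what direct state access changes in practice, here is a minimal sketch contrasting the old bind-to-edit pattern with the GL 4.5 entry points (the pixels pointer is a hypothetical source image, and a GL 4.5 context is assumed):

  // Bind-to-edit (pre-4.5): a texture must be bound to the context to modify it.
  GLuint tex;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 256, 256);
  glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

  // Direct state access (4.5): the object is addressed by name; no binding needed.
  GLuint tex2;
  glCreateTextures(GL_TEXTURE_2D, 1, &tex2);
  glTextureStorage2D(tex2, 1, GL_RGBA8, 256, 256);
  glTextureSubImage2D(tex2, 0, 0, 0, 256, 256, GL_RGBA, GL_UNSIGNED_BYTE, pixels);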


It also adds a few new optional extensions:

ARB_pipeline_statistics_query lets a developer ask the GPU what it has been doing. This could be useful for "profiling" an application (listing completed work to identify optimization points); see the sketch after this list.

ARB_sparse_buffer allows developers to perform calculations on pieces of large, generic buffers without committing physical memory for the whole thing. This is similar to ARB_sparse_texture... except that extension is for textures. Buffers are useful for things like vertex data (and so forth). A sketch of this one also follows the list.

ARB_transform_feedback_overflow_query is apparently designed to let developers choose whether or not to draw objects based on whether a transform feedback buffer has overflowed. I might be wrong, but it seems like this would be useful for deciding whether or not to draw objects generated by geometry shaders.

KHR_blend_equation_advanced adds new blending equations between objects. If you use Photoshop, these are modes like "multiply", "screen", "darken", "lighten", "difference", and so forth. On NVIDIA's side, this will be supported directly in hardware on Maxwell and Tegra K1 (and later); Fermi and Kepler will support the functionality, but the driver will perform the calculations with shaders. AMD has yet to comment, as far as I can tell.
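
To make two of those a little more concrete, here is a minimal sketch of ARB_pipeline_statistics_query, assuming a context with the extension present (drawScene() is a hypothetical stand-in for a frame's draw calls):

  // Count vertex shader invocations across one frame's draw calls.
  GLuint query;
  glGenQueries(1, &query);
  glBeginQuery(GL_VERTEX_SHADER_INVOCATIONS_ARB, query);
  drawScene();                                  // hypothetical: issue the frame here
  glEndQuery(GL_VERTEX_SHADER_INVOCATIONS_ARB);

  GLuint64 invocations = 0;
  glGetQueryObjectui64v(query, GL_QUERY_RESULT, &invocations);  // waits for the GPU
  glDeleteQueries(1, &query);

And a similarly hedged sketch of ARB_sparse_buffer: reserve a large virtual buffer up front, then commit physical pages only where they are actually needed.

  // Page size is implementation-defined; query it first.
  GLint pageSize = 0;
  glGetIntegerv(GL_SPARSE_BUFFER_PAGE_SIZE_ARB, &pageSize);

  GLuint buf;
  glGenBuffers(1, &buf);
  glBindBuffer(GL_ARRAY_BUFFER, buf);
  glBufferStorage(GL_ARRAY_BUFFER, 1 << 30, nullptr, GL_SPARSE_STORAGE_BIT_ARB);  // 1 GiB of address space
  glBufferPageCommitmentARB(GL_ARRAY_BUFFER, 0, pageSize * 16, GL_TRUE);          // back only the first 16 pages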

Image from NVIDIA GTC Presentation

NVIDIA has launched 340.65 (340.23.01 for Linux) beta drivers for developers. If you are not looking to create OpenGL 4.5 applications, do not get this driver; you really should not have any use for it.

Next Generation OpenGL Initiative Announced

The Khronos Group has also announced "a call for participation" to outline a new specification for graphics and compute. They want it to give developers explicit control over CPU and GPU tasks, be multithreaded, have minimal overhead, use a common shader language, and undergo "rigorous conformance testing". This sounds a lot like the design goals of Mantle (and what we know of DirectX 12).


And really, from what I hear and understand, that is what OpenGL needs at this point. Graphics cards look nothing like they did a decade ago (let alone over two decades ago). Modern GPUs present very similar interfaces and data structures to one another, even if their fundamental architectures vary greatly. If we can draw a line in the sand, legacy APIs could still be supported, just not heavily optimized for, by the drivers. Before long, the available performance would be so high that it wouldn't matter for legacy applications, as long as they continue to run.

On top of that, next-generation drivers should be significantly easier to develop, considering the reduced error checking (and other responsibilities). As I said in Intel's DirectX 12 story, it is still unclear whether the performance increase will be large enough to make most optimizations unnecessary, particularly those that increase workload or developer effort in exchange for queuing fewer GPU commands. We will need to wait for game developers to use it for a while before we know.

Prying OpenGL to slip a little Mantle inside

Subject: General Tech | August 15, 2014 - 10:09 AM |
Tagged: amd, Mantle, opengl, OpenGL Next

Along with his announcements about FreeSync, Richard Huddy also discussed OpenGL Next, its relationship with Mantle, and the role Mantle played in DirectX 12's development.  AMD has given the Khronos Group, the developers of OpenGL, complete access to Mantle to help them integrate it into future versions of the API, starting with OpenGL Next.  He also discussed the advantages of Mantle over DirectX, citing AMD's ability to update it much more frequently than Microsoft has done with DX.  With over 75 developers working on titles that take advantage of Mantle, the interest is definitely there, but it is uncertain whether devs will actually benefit from an API which updates at a pace faster than a game can be developed.  Read on at The Tech Report.


"At Siggraph yesterday, AMD's Richard Huddy gave us an update on Mantle, and he also revealed some interesting details about AMD's role in the development of the next-gen OpenGL API."

Here is some more Tech News from around the web:

Tech Talk

Richard Huddy Discusses FreeSync Availability Timeframes

Subject: General Tech, Displays | August 14, 2014 - 01:59 PM |
Tagged: amd, freesync, g-sync, Siggraph, siggraph 2014

At SIGGRAPH, Richard Huddy of AMD announced the release windows of FreeSync, their adaptive refresh rate technology, to The Tech Report. Compatible monitors will begin sampling "as early as" September. Actual products are expected to ship to consumers in early 2015. Apparently, more than one display vendor is working on support, although names and vendor-specific release windows are unannounced.


As for cost of implementation, Richard Huddy believes that the added cost should be no more than $10-20 USD (to the manufacturer). Of course, the final price to end-users cannot be derived from this - that depends on how quickly the display vendor expects to sell product, profit margins, their willingness to push new technology, competition, and so forth.

If you want to take full advantage of FreeSync, you will need a compatible GPU (look for "gaming" support in AMD's official FreeSync compatibility list). All future AMD GPUs are expected to support the technology.

Source: Tech Report

Podcast #313 - New Kaveri APUs, ASUS ROG Swift G-Sync Monitor, Intel Core M Processors and more!

Subject: General Tech | August 14, 2014 - 12:30 PM |
Tagged: video, ssd, ROG Swift, ROG, podcast, ocz, nvidia, Kaveri, Intel, g-sync, FMS 2014, crossblade ranger, core m, Broadwell, asus, ARC 100, amd, A6-7400K, A10-7800, 14nm

PC Perspective Podcast #313 - 08/14/2014

Join us this week as we discuss new Kaveri APUs, ASUS ROG Swift G-Sync Monitor, Intel Core M Processors and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Allyn Malventano

Program length: 1:41:24
 

Subscribe to the PC Perspective YouTube Channel for more videos, reviews and podcasts!!

 

Diamond's Xtreme Sound XS71HDU looks good but how does it sound?

Subject: General Tech | August 14, 2014 - 12:00 PM |
Tagged: audio, diamond multimedia, Xtreme Sound XS71HDU, usb sound card, DAC

The Diamond Xtreme Sound XS71HDU could be a versatile $60 solution for those with high-end audio equipment that would benefit from a proper DAC.  With both optical in and out, it is capable of more than an onboard solution, not to mention the six 3.5 mm jacks covering stereo headphones, 7.1 surround (rear, sub, and side), mic, and line in.  The design and features are impressive; however, the performance failed to please The Tech Report, who felt that similar solutions offer much higher quality sound reproduction.


"We love sound cards here at TR, but they don't fit in every kind of PC. Diamond's Xtreme Sound XS71HDU serves up the same kinds of features in a tiny USB package suitable for mini-PCs and ultrabooks. We took it for a spin to see if it's as good as it looks."

Here is some more Tech News from around the web:

Audio Corner

That Linux thing that nobody uses

Subject: General Tech | August 14, 2014 - 10:31 AM |
Tagged: linux

For many, Linux is a mysterious thing that is either dead or about to die because no one uses it.  Linux.com has put together an overview of what Linux is and where it is being used.  Much of what they describe at the beginning applies to all operating systems, as they share similar features; it is only in the details that they differ.  If you have only thought of Linux as that OS you can't game on, then it is worth taking a look through the descriptions of the distributions and why people choose to use Linux.  You may never build a box that runs Linux, but if you are considering buying a Steambox when they arrive on the market, you will find yourself using a type of Linux, and a basic understanding of the parts of the OS will help with troubleshooting and optimization.   If you already use Linux, then fire up Steam and take a break.


"For those in the know, you understand that Linux is actually everywhere. It's in your phones, in your cars, in your refrigerators, your Roku devices. It runs most of the Internet, the supercomputers making scientific breakthroughs, and the world's stock exchanges."

Here is some more Tech News from around the web:

Tech Talk

Source: Linux.com

Intel and Microsoft Show DirectX 12 Demo and Benchmark

Subject: General Tech, Graphics Cards, Processors, Mobile, Shows and Expos | August 13, 2014 - 06:55 PM |
Tagged: siggraph 2014, Siggraph, microsoft, Intel, DirectX 12, directx 11, DirectX

Along with GDC Europe and Gamescom, SIGGRAPH 2014 is going on in Vancouver, BC, and Intel brought a DirectX 12 demo to their booth. The scene, containing 50,000 asteroids, each issued in its own draw call, was developed on both Direct3D 11 and Direct3D 12 code paths, which could apparently be switched while the demo is running. Intel claims to have measured both power and frame rate.

Variable power to hit a desired frame rate, DX11 and DX12.

The test system is a Surface Pro 3 with an Intel HD 4400 GPU. A bit of digging makes that the i5-based Surface Pro 3; removing another shovel-load of mystery, that would be the Intel Core i5-4300U with two cores, four threads, a 1.9 GHz base clock, up to a 2.9 GHz turbo clock, 3 MB of cache, and (of course) the Haswell architecture.

While not top-of-the-line, it is also not bottom-of-the-barrel. It is a respectable CPU.

Intel's demo on this processor shows a significant power reduction in the CPU, and even a slight decrease in GPU power, at the same target frame rate. If power is not throttled, the demo goes from 19 FPS all the way up to a playable 33 FPS.

Intel will discuss more during a video interview, tomorrow (Thursday) at 5pm EDT.

Maximum power in DirectX 11 mode.

For my contribution to the story, I would like to address the first comment on the MSDN article. It claims that this is just an "ideal scenario" of a scene that is bottlenecked by draw calls. The thing is: that is the point. Sure, a game developer could optimize the scene to (maybe) instance objects together, and so forth, but that is unnecessary work. Why should programmers, or worse, artists, need to spend so much of their time structuring art so that it can be batched together into fewer, bigger commands? Would it not be much easier, and all-around better, if the content could be developed as it most naturally comes together?
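
To put rough code to that argument, here is a hedged sketch of the "naive way" versus the hand-batched way the commenter is implicitly demanding. It is written in OpenGL for familiarity (names like asteroids and modelLoc are hypothetical), but the same trade-off applies to Direct3D:

  // Naive: one draw call per asteroid. Trivial to write, but the per-call
  // overhead is exactly what Mantle/DX12-style APIs are trying to shrink.
  for (const Asteroid& a : asteroids) {
      glUniformMatrix4fv(modelLoc, 1, GL_FALSE, a.modelMatrix);
      glDrawElements(GL_TRIANGLES, a.indexCount, GL_UNSIGNED_INT, nullptr);
  }

  // Hand-batched: per-object transforms moved into a buffer so the whole field
  // goes out as one instanced call. Faster on today's APIs, but it is extra
  // engineering, and it constrains how the art can be authored.
  glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr,
                          (GLsizei)asteroids.size());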

That, of course, depends on how much performance improvement we actually see from DirectX 12 compared to the theoretical maximum efficiency. If pushing two workloads through a DX12 GPU takes about the same time as pushing one double-sized workload, then developers are free to use whatever solution is most direct.

Maximum power when switching to DirectX 12 mode.

If, on the other hand, pushing two workloads is 1000x slower than pushing a single, double-sized one, while DirectX 11 was 10,000x slower, then the improvement matters less, because developers will still need to do their tricks in those situations. The closer the overhead gets to zero, the fewer occasions where strict optimization is necessary.

If there are any DirectX 11 game developers, artists, and producers out there, we would like to hear from you. How much would a (let's say) 90% reduction in draw call latency (which is around what Mantle claims) give you, in terms of fewer required optimizations? Can you afford to solve problems "the naive way" now? Some of the time? Most of the time? Would it still be worth it to do things like object instancing and fewer, larger materials and shaders? How often?

Podcast Listeners and Viewers: Win an EVGA SuperNOVA 1000 G2 Power Supply

Subject: General Tech | August 13, 2014 - 02:15 PM |
Tagged: supernova, podcast, giveaway, evga, contest

A big THANK YOU goes to our friends at EVGA for hooking us up with another item to give away to our podcast listeners and viewers this week. If you watch tonight's LIVE recording of Podcast #313 (10pm ET / 7pm PT at http://pcper.com/live) or download our podcast after the fact (at http://pcper.com/podcast), then you'll have the tools needed to win an EVGA SuperNOVA 1000 G2 Power Supply!! (Valued at $165 based on Amazon's current selling price.) See our review of the 750/850 G2 SuperNOVA units.


How do you enter? Well, on the live stream (or in the downloaded version) we'll give out a special keyword during our discussion of the contest for you to input in the form below. That's it! 

Anyone can enter from anywhere in the world - we'll cover the shipping. We'll draw a random winner on August 20th and announce it on the next episode of the podcast! Good luck, and once again, thanks go out to EVGA for supplying the prize!

Unreal Tournament's Training Day, get in on the Pre-Pre-Alpha right now!

Subject: General Tech | August 13, 2014 - 11:02 AM |
Tagged: Unreal Tournament, gaming, Alpha

Feel like (Pre-Pre-)Alpha testing Unreal Tournament without forking money over for early access?  No problem, thanks to Epic and Unreal Forums member 'raxxy', who is compiling and updating the (pre)Alpha version of the next Unreal Tournament.  Sure, there may not be many textures, but there is a Flak Cannon, so what could you possibly have to complain about?  There are frequent updates, and a major part of participating is giving feedback to the devs, so please be sure to check into the #beyondunreal IRC channel to get tips and offer feedback.  Rock, Paper, SHOTGUN reports that the servers are massively packed right now, so you may not be able to join in immediately, but it is worth trying.

raxxy would like you to understand "These are PRE-ALPHA Prototype Builds. Seriously. Super early testing. So early it's technically not even pre alpha, it's debug code!"

You can be sure that the Fragging Frogs will be taking advantage of this, as well as revisiting the much beloved UT2K4, so if you haven't joined up yet ... what are you waiting for?

Check out Fatal1ty playing if you can't get on

"Want to play the new Unreal Tournament for free, right this very second? Cor blimey and OMG you totes can! Hero of the people ‘raxxy’ on the Unreal Forums is compiling Epic’s builds and releasing them as small, playable packages that anyone can run, with multiple updates per week. The maps are untextured, the weapons unbalanced, and things change rapidly as everything’s still “pre-alpha” but it’s playable and – more importantly – fun."

Here is some more Tech News from around the web:

Gaming

Meet Tonga, soon to be your new Radeon

Subject: General Tech | August 13, 2014 - 09:58 AM |
Tagged: tonga, radeon, FirePro W7100, amd

A little secret popped out with the release of AMD's FirePro W7100: a new GPU that goes by the name of Tonga, which is very likely to replace the aging Tahiti chip that has been in use since the HD 7900 series.  The specs The Tech Report saw show interesting changes from Tahiti, including a reduction of the memory interface to 256-bit, in line with NVIDIA's current offerings.  The number of stream processors might be reduced to 1792 from 2048, but that is based on the W7100, and the GPU may yet be released with the full 32 GCN compute units.  Many other features have seen increases: the number of Asynchronous Compute Engines goes from 2 to 8, the number of rasterized triangles per clock doubles to 4, and it adds support for the new TrueAudio DSP and CrossFire XDMA.


"The bottom line is that Tonga joins the Hawaii (Radeon R9 290X) and Bonaire (R7 260X) chips as the only members of AMD' s GCN 1.1 series of graphics processors. Tonga looks to be a mid-sized GPU and is expected to supplant the venerable Tahiti chip used in everything from the original Radeon HD 7970 to the current Radeon R9 280."

Here is some more Tech News from around the web:

Tech Talk