GDC 15: Quixel Advertises MEGASCANS with Jungle Scene

Subject: General Tech | March 1, 2015 - 09:11 PM |
Tagged: quixel, GDC, gdc 15, ue4, unreal engine 4, gdc 2015

You know that a week will be busy when companies start announcing a day or two early to beat the flood. While the Game Developers Conference starts tomorrow, Quixel published their Jungle demo to YouTube today to promote their MEGASCANS material library. The video was rendered in Unreal Engine 4.

Their other material samples look quite convincing. The vines on a wall (a column in this case) are particularly interesting because they even look like two distinct layers, despite being a single mesh with displacement as far as I can tell. I don't know, maybe it is two or three layers. It would certainly make sense if it were, but the top and bottom suggest a single mesh, and that is impressive. It even looks self-occluding.

quixel-megascan.png

Pricing and availability for the library have not yet been disclosed, but it sounds like it will be a subscription service. Their software ranges from $25 to $500, depending on what you get and what sort of license you need (academic vs. commercial and so forth).

Source: Quixel

HTC Announces Vive VR Headset in Partnership with Valve

Subject: General Tech, Mobile, Shows and Expos | March 1, 2015 - 05:16 PM |
Tagged: MWC, mwc 15, GDC, gdc 15, htc, valve, vive, vive vr, Oculus

Mobile World Congress (MWC) and the Game Developers Conference (GDC) severely overlap this year, and apparently not just in dates. HTC just announced the Vive VR headset, developed alongside Valve, at MWC. The developer edition will contain two 1200x1080 displays with a 90Hz refresh rate, and it will launch this spring. The consumer edition will launch this holiday. They made sure to underline 2015, so you know they're serious. Want more information? Well, that will be for Valve to discuss at GDC.

htc-valve-vr.jpg

The confusing part: why is this not partnered with Oculus? When Michael Abrash left Valve to go there, I assumed that Valve was shedding its research to Facebook's subsidiary and letting them take the hit. Now, honestly, it seems like Facebook simply poached Abrash, Valve said "oh well", and the two companies kept to their respective research. Who knows? Maybe that is not the case. We might find out more at GDC, but you would expect Oculus to be mentioned if they had any involvement at all.

Valve will host an event on the second official day of GDC, March 3rd at 3pm. In other words, Valve will make an announcement on 3/3 @ 3. Could it involve Left 4 Dead 3? Portal 3? Will they pull a Crytek and name their engine Source 3? Are they just trolling absolutely everyone? Will it have something to do with NVIDIA's March 3rd announcement? Do you honestly think I have any non-speculative information about this? No. No I don't. There, I answered one of those questions.

Source: HTCVR

NVIDIA Event on March 3rd. Why?

Subject: Graphics Cards, Mobile, Shows and Expos | February 11, 2015 - 03:25 PM |
Tagged: Tegra X1, nvidia, mwc 15, MWC, gdc 2015, GDC, DirectX 12

On March 3rd, NVIDIA will host an event called “Made to Game”. Invitations have gone out to numerous outlets, including Android Police, who published a censored screenshot of it. This suggests that it will have something to do with the Tegra X1, especially since the date is the day after Mobile World Congress starts. Despite all of this, I think it is for something else entirely.

nvidia-march-3-2015-event.png

Image Credit: Android Police

Allow me to highlight two points. First, Tech Report claims that the event is taking place in San Francisco, which is about as far away from Barcelona, Spain as you can get. It is close to GDC, however, which also starts on March 2nd. If this were meant to align with Mobile World Congress, you would not want attendees to take a 14-hour flight for a day trip.

Second, the invitation specifically says: "More than 5 years in the making, what I want to share with you will redefine the future of gaming." Compare that to the DirectX 12 announcement blog post on March 20th of last year (2014): "Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC [2014]."

So yes, while it might involve the Tegra X1 processor for Windows 10 on mobile devices, which is the only reason I can think of that they would want Android Police there (apart from "we're inviting everyone everywhere"), I expect that this event is about DirectX 12. I would assume Microsoft will host its own event involving many partners, but I could see NVIDIA wanting to save a little something for its own stage. What would that be? No idea.

glNext Initiative Unveiled at GDC 2015

Subject: Graphics Cards, Shows and Expos | February 4, 2015 - 03:33 AM |
Tagged: OpenGL Next, opengl, glnext, gdc 2015, GDC

The first next-generation graphics API to be released was Mantle, which launched a little while after Battlefield 4, although its SDK is still invite-only. The DirectX 12 API quietly launched with the recent Windows 10 Technical Preview, but no drivers, SDK, or software (that we know about) are available to the public yet. The Khronos Group has announced their project, and that is about it so far.

opengl_logo.jpg

According to Develop Magazine, the GDC event listing, and the participants, the next OpenGL (currently called the "glNext initiative") will be unveiled at GDC 2015. The talk will be presented by Valve, but it will also include Epic Games, who was closely involved in DirectX 12 with Unreal Engine; Oxide Games and EA/DICE, who were early partners with AMD on Mantle; and Unity, who recently announced support for DirectX 12 when it launches with Windows 10. Basically, this GDC talk includes almost every software developer that came out in early support of either DirectX 12 or Mantle, plus Valve. Off the top of my head, I can only think of Futuremark as unlisted. On the other hand, while they will obviously have driver support from at least one graphics vendor, none are listed. Will we see NVIDIA? Intel? AMD? All of the above? We don't know.

When I last discussed the next OpenGL initiative, I was attempting to parse the naming survey to figure out bits of the technology itself. As it turns out, the talk claims to go deep into the API, with demos, examples, and "real-world applications running on glNext drivers and hardware". If this information makes it out (some talks unfortunately remain private, although this one looks public), then we should know more about it than we currently know about any competing API. Personally, I am hoping that they spent a lot of effort on the GPGPU side of things, sort of building graphics atop compute rather than treating the two as separate entities. This would be especially good if it could be sandboxed for web applications.

This could get interesting.

Source: GDC

GDC 2014: Shader-limited Optimization for AMD's GCN

Subject: Editorial, General Tech, Graphics Cards, Processors, Shows and Expos | March 30, 2014 - 01:45 AM |
Tagged: gdc 14, GDC, GCN, amd

While Mantle and DirectX 12 are designed to reduce overhead and keep GPUs loaded, the conversation shifts when you are limited by shader throughput. Modern graphics processors are dominated by, in some cases, thousands of compute cores. Video drivers are complex packages of software, and one of their many tasks is converting your scripts, known as shaders, into machine code for their hardware. If this machine code is efficient, it can mean drastically higher frame rates, especially at extreme resolutions and intense quality settings.

amd-gcn-unit.jpg

Emil Persson of Avalanche Studios, probably best known for the Just Cause franchise, published his slides and speech on optimizing shaders. His talk focuses on AMD's GCN architecture, due to its presence in both consoles and PCs, while bringing up older GPUs for examples. Yes, he has many snippets of GPU assembly code.

AMD's GCN architecture is actually quite interesting, especially dissected as it was in the presentation. It is simpler than its ancestors and much more CPU-like, with resources mapped to memory (and caches of said memory) rather than "slots" (although drivers and APIs often pretend those relics still exist), with vectors mostly treated as collections of scalars, and so forth. Tricks which attempt to combine instructions into vectors, such as using dot products, can simply place irrelevant restrictions on the compiler and optimizer... since the hardware breaks those vector operations back down into the very same component-by-component ops you thought you were avoiding.

Basically, and it makes sense coming from GDC, this talk rarely glosses over points. It goes over the execution speed of one individual op compared to another, at various precisions, and which to avoid (protip: integer divide). Also, fused multiply-add is awesome.
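To make that concrete, here is a minimal C sketch of the flavor of trade-off the talk covers. The function names and examples are mine, not from Persson's slides, and real shader code would of course be written in HLSL/GLSL and compiled by the driver; this just illustrates the scalar-level thinking.

```c
#include <math.h>

/* A float4 dot product is ultimately component-by-component multiply-adds;
   on scalar-per-lane hardware there is no "vector instruction" being saved. */
static float dot4(const float a[4], const float b[4])
{
    float acc = a[0] * b[0];
    acc = fmaf(a[1], b[1], acc);  /* fused multiply-add: one op, one rounding */
    acc = fmaf(a[2], b[2], acc);
    acc = fmaf(a[3], b[3], acc);
    return acc;
}

/* Integer division is among the slowest operations on most GPUs. When the
   divisor is a known power of two, a shift does the same job far more cheaply. */
static unsigned divide_by_16(unsigned x)
{
    return x >> 4;  /* equivalent to x / 16 for unsigned values */
}
```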

I know I learned.

As a final note, this returns to the discussions we had prior to the launch of the next-generation consoles. Developers are learning how to make their shader code much more efficient on GCN, and that could easily translate to leading PC titles. Especially with DirectX 12 and Mantle, which lighten the CPU-based bottlenecks, learning how to do more work per FLOP addresses the other side. Everyone was looking at Mantle as AMD's play for success through harnessing console mindshare (and, in terms of Intel vs AMD, it might help). But honestly, I believe that it will be trends like this presentation which prove more significant... even if behind the scenes. Of course developers were always having these discussions, but now console developers will probably be talking about only one architecture: that is a lot of people talking about very few things.

This is not really reducing overhead; this is teaching people how to do more work with less, especially in situations (high resolutions with complex shaders) where the GPU is most relevant.

Podcast #293 - NVIDIA Titan-Z, ASUS ROG Poseidon 780, News from OculusVR and more!

Subject: General Tech | March 27, 2014 - 02:42 PM |
Tagged: W9100, video, titan z, poseidon 780, podcast, Oculus, nvidia, GTC, GDC

PC Perspective Podcast #293 - 03/27/2014

Join us this week as we discuss the NVIDIA Titan-Z, ASUS ROG Poseidon 780, News from OculusVR and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

  • iTunes - Subscribe to the podcast directly through the iTunes Store
  • RSS - Subscribe through your regular RSS reader
  • MP3 - Direct download link to the MP3 file

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath and Allyn Malventano

 
This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset!
 
Program length: 1:19:03
  1. Week in Review:
    1. 0:10:45 Microsoft's DirectX 12 (Live Blog)
  2. 0:37:07 This podcast is brought to you by Coolermaster, and the CM Storm Pulse-R Gaming Headset
  3. News items of interest:
  4. Hardware/Software Picks of the Week:
    1. Josh: Certainly not a Skype Connection to the Studio
  5. Closing/outro

Be sure to subscribe to the PC Perspective YouTube channel!!

 

GDC wasn't just about DirectX; OpenGL was also a hot topic

Subject: General Tech | March 24, 2014 - 12:26 PM |
Tagged: opengl, nvidia, gdc 14, GDC, amd, Intel

DX12 and its Mantle-like qualities garnered the most interest from gamers at GDC, but an odd trio of companies was also pushing a different API. OpenGL has been around for over 20 years and has waged a long war against Direct3D, a war which may be intensifying again. Representatives from Intel, AMD and NVIDIA all took to the stage to praise the new OpenGL standard, suggesting that with a tweaked implementation of OpenGL, developers could expect to see performance increases of between 7 and 15 times. The Inquirer has embedded an hour-long video in their story; check it out to learn more.

slide-1-638.jpg

"CHIP DESIGNERS AMD, Intel and Nvidia teamed up to tout the advantages of the OpenGL multi-platform application programming interface (API) at this year's Game Developers Conference (GDC)."

Here is some more Tech News from around the web:

Tech Talk

Source: The Inquirer
Manufacturer: NVIDIA

DX11 could rival Mantle

The big story at GDC last week was Microsoft's reveal of DirectX 12 and the future of the dominant API for PC gaming. There was plenty of build-up to the announcement, with Microsoft's DirectX team posting teasers and starting up a Twitter account for the occasion. I hosted a live blog from the event which included pictures of the slides. It was our most successful event of this type, with literally thousands of people joining in the conversation. Along with the debates over the similarities to AMD's Mantle API and the timeline for DX12's release, there are plenty of stories to be told.

After the initial session, I wanted to set up meetings with both AMD and NVIDIA to discuss what had been shown and get some feedback on the planned direction for the GPU giants' implementations. NVIDIA presented us with a very interesting set of data focused both on the future with DX12 and on the now of DirectX 11.

15.jpg

The reason for the topic is easy to decipher: AMD has built up the image of Mantle as the future of PC gaming and, with a full 18 months before Microsoft's DirectX 12 is released, how developers and gamers respond will make an important impact on the market. NVIDIA doesn't like to talk about Mantle directly, but it's obvious that it feels the need to address the questions in a roundabout fashion. During our time with NVIDIA's Tony Tamasi at GDC, the discussion centered as much on OpenGL and DirectX 11 as anything else.

What are APIs and why do you care?

For those that might not really understand what DirectX and OpenGL are, a bit of background first. APIs (application programming interfaces) are responsible for providing an abstraction layer between hardware and software applications. An API can deliver consistent programming models (though the language can vary) and do so across various hardware vendors' products and even between hardware generations. They can provide access to hardware feature sets with a wide range of complexity, and they allow users access to hardware without necessarily knowing great detail about it.

Over the years, APIs have developed and evolved but still retain backwards compatibility. Companies like NVIDIA and AMD can improve their DirectX implementations to increase performance or efficiency without adversely affecting (usually, at least) other games or applications. And because games use that same API for programming, changes to how NVIDIA and AMD handle the API integration don't require game developer intervention.

With the release of AMD Mantle, the idea of a "low level" API has been placed in the minds of gamers and developers. The term "low level" can mean many things, but in general it is associated with an API that is more direct, has a thinner set of abstraction layers, and uses less translation from code to hardware. The goal is to reduce the amount of overhead (performance hit) that APIs naturally incur for these translations. With additional performance available, those CPU cycles can be used by the program (game) or be slept to improve battery life. In certain cases, GPU throughput can increase where API overhead is impeding the video card's progress.
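As a purely illustrative sketch, and not any real graphics API, here is roughly where that per-draw-call overhead lives and why doing the validation work once, up front, reduces it. Every name below is hypothetical.

```c
/* Hypothetical sketch (not a real API): where per-draw-call driver overhead
   comes from, and why recording work ahead of time reduces it. */
#include <stddef.h>

typedef struct { int pipeline_state; int resource_bindings; } Draw;
typedef struct { const Draw *draws; size_t count; } CommandBuffer;

/* Stand-ins for driver work: state validation and translation into hardware
   commands. In a real driver, these consume CPU time on every call. */
static void validate_and_translate(const Draw *d) { (void)d; }
static void submit_to_gpu(const Draw *d)          { (void)d; }

/* "Thick" path: validation and translation happen for every draw, every
   frame, even if nothing changed since the previous frame. */
void draw_frame_thick(const Draw *draws, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        validate_and_translate(&draws[i]);
        submit_to_gpu(&draws[i]);
    }
}

/* "Thin" path: the application validates once while building a command
   buffer; each frame afterwards only replays it. */
void draw_frame_thin(const CommandBuffer *prebuilt)
{
    for (size_t i = 0; i < prebuilt->count; ++i)
        submit_to_gpu(&prebuilt->draws[i]);
}
```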

Passing additional control to the game developers, away from the API or GPU driver developers, gives those coders additional power and improves the ability for some vendors to differentiate. Interestingly, not all developers want this kind of control: it requires more time and more development work, and small teams that depend on that abstraction to make coding easier will see only limited performance advantages.

This transition to a lower level API is being driven by the widening performance gap between CPUs and GPUs. NVIDIA provided the images below.

04.jpg

On the left we see performance scaling in terms of GFLOPS, and on the right the metric is memory bandwidth. Clearly, the performance of NVIDIA's graphics chips (as well as AMD's) has far outpaced what the best Intel desktop processors have been able to deliver, and that gap means the industry needs to innovate to find ways to close it.

Continue reading NVIDIA Talks DX12, DX11 Efficiency Improvements!!!

GDC 14: NVIDIA, AMD, and Intel Discuss OpenGL Speed-ups

Subject: General Tech, Shows and Expos | March 22, 2014 - 01:41 AM |
Tagged: opengl, nvidia, Intel, gdc 14, GDC, amd

So, for all the discussion about DirectX 12, the three main desktop GPU vendors, NVIDIA, AMD, and Intel, want to tell OpenGL developers how to tune their applications. Using OpenGL 4.2 and a few cross-vendor extensions, because OpenGL is all about its extensions, a handful of known tricks can reduce driver overhead by up to ten-fold and increase performance by up to fifteen-fold. The talk is very graphics developer-centric, but it basically describes a series of tricks known to accomplish feats similar to what Mantle and DirectX 12 promise.

opengl_logo.jpg

The 130-slide presentation is broken into a few sections, each GPU vendor getting a decent chunk of time. On occasion, they mention which implementation fares better with a given function call. The main point that they wanted to drive home (they clearly repeated the slide three times with three different fonts) is that none of this requires a new API. Everything exists and can be implemented right now. The real trick is knowing how not to poke the graphics library in the wrong way.
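For the curious, here is a minimal C sketch of two of the kinds of tricks in question, persistent-mapped buffers and multi-draw indirect, assuming a GL 4.4-class context (or 4.2 plus the relevant ARB extensions) and an extension loader such as GLEW already initialized. Setup and error handling are omitted; this is an outline, not the presenters' code.

```c
#include <GL/glew.h>  /* assumes GLEW (or similar) provides the GL entry points */

typedef struct {
    GLuint count, instanceCount, firstIndex, baseVertex, baseInstance;
} DrawElementsIndirectCommand;

/* 1. Persistent-mapped buffer: map once at startup and keep writing into it,
   instead of mapping/unmapping or calling glBufferSubData every frame. */
void *map_persistent(GLuint buffer, GLsizeiptr size)
{
    const GLbitfield flags = GL_MAP_WRITE_BIT |
                             GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT;
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags);  /* immutable storage */
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}

/* 2. MultiDrawIndirect: submit many draws with a single API call, so the
   per-call driver cost is paid once per batch instead of once per object. */
void submit_batch(GLuint indirectBuffer, GLsizei drawCount)
{
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                (const void *)0, drawCount,
                                sizeof(DrawElementsIndirectCommand));
}
```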

The page also hosts a keynote from the recent Steam Dev Days.

That said, an advantage that I expect from DirectX 12 and Mantle is reduced driver complexity. Since the processors have settled into standards, I expect that drivers will not need to do as much unless the library demands it for legacy reasons. I am not sure how extending OpenGL will affect that benefit, as opposed to isolating the legacy and building on a solid foundation, but I wonder if these extensions could be just as easy to maintain and optimize. Maybe they can.

Either way, the performance figures do not lie.

Source: NVIDIA

Oculus Rift Development Kit 2 (DK2) Is $350, Expected in July

Subject: General Tech, Displays, Shows and Expos | March 22, 2014 - 01:04 AM |
Tagged: oculus rift, Oculus, gdc 14, GDC

Last month, we published a news piece stating that Oculus Rift production had been suspended because "certain components" were unavailable. At the time, the company said it was looking for alternate suppliers but did not know how long that would take. The speculation was that the company was simply readying a new version and did not want to cannibalize its sales.

This week, they announced a new version which is available for pre-order and expected to ship in July.

DK2, as it is called, integrates a pair of 960x1080 OLED displays (correction, March 22nd @ 3:15pm: it is technically a single 1080p display that is divided per eye) for higher resolution and lower persistence. Citing Valve's VR research, they claim that the low persistence will reduce the motion blur caused by your eye blending neighboring frames together. In this design, the panel flashes the image for a short period before going black, and does this at a high enough rate to keep your eye fed with light. The higher resolution also reduces the "screen door effect" complained about with the first release. Like their "Crystal Cove" prototype, it also uses an external camera to reduce latency in detecting your movement. All of this should add up to less motion sickness.

I would expect that VR has a long road ahead of it before it becomes a commercial product for the general population, though. There are many legitimate concerns about leaving your users trapped in a sensory deprivation apparatus, when Kinect could not even go a couple of days without someone pretending to play volleyball and wrecking their TV with ceiling fan fragments. Still, this company seems to be doing it intelligently: stay afloat on developers and lead users as you work through the prototypes. It is cool, even if it will get significantly better, and people will support the research while getting the best available at the time.

DK2 is available for pre-order for $350 and is expected to ship in July.

Source: Oculus