Shedding a little light on Monday's announcement

Most of our readers should have some familiarity with GameWorks, a series of libraries and utilities that help game developers (and others) create software. While many hardware and platform vendors provide samples and frameworks that shoulder much of the work required to solve complex problems, GameWorks is NVIDIA's branding for its suite of such technologies. Their hope is that it pushes the industry forward, which in turn drives GPU sales as users see the benefits of upgrading.

nvidia-2016-gdc-gameworksmission.png

This release, GameWorks SDK 3.1, contains three complete features and two “beta” ones. We will start with the first three, each of which targets a portion of the lighting and shadowing problem. The last two, which we will discuss at the end, are the experimental ones and fall under the blanket of physics and visual effects.

nvidia-2016-gdc-volumetriclighting-fallout.png

The first technology is Volumetric Lighting, which simulates the way light scatters off dust in the atmosphere. Game developers have been approximating this effect for a long time. In fact, I remember a particular section of Resident Evil 4 where you walk down a dim hallway that has light rays spilling in from the windows. GameCube-era graphics could only do so much, though, and certain camera positions show that the effect was just a translucent, one-sided, decorative plane. It was a cheat that was hand-placed by a clever artist.

nvidia-2016-gdc-volumetriclighting-shaftswireframe.png

GameWorks' Volumetric Lighting goes after the same effect, but with a much different implementation. It looks at the generated shadow maps and, using hardware tessellation, extrudes geometry from the unshadowed portions toward the light. These bits of geometry accumulate according to how deep the volume is at each pixel, which translates into the required highlight. Also, since it's hardware tessellated, it probably has a smaller impact on performance: the GPU only needs to store enough information to generate the geometry, rather than storing (and updating) the geometry data for all possible light shafts themselves, and it needs to store those shadow maps anyway.
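
For the curious, here is a minimal, purely illustrative ray-march of the single-scattering integral that light-shaft techniques like this approximate. To be clear, NVIDIA's implementation is the tessellation-based extrusion described above, not a ray march, and every constant here, along with the visibility() stub, is made up for the example:

```python
import math

def visibility(t):
    # Stand-in for a shadow-map lookup: pretend light reaches the ray only
    # between 2.0 and 5.0 units from the camera (an unshadowed "shaft").
    return 1.0 if 2.0 <= t <= 5.0 else 0.0

def in_scattering(ray_length=10.0, steps=100, sigma_s=0.1, sigma_t=0.15):
    """Accumulate light scattered toward the camera along one view ray.

    Assumes a homogeneous medium; ignores the phase function and the
    light's distance falloff for brevity.
    """
    dt = ray_length / steps
    radiance = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                       # sample mid-segment
        transmittance = math.exp(-sigma_t * t)   # Beer-Lambert attenuation
        radiance += sigma_s * visibility(t) * transmittance * dt
    return radiance

print(f"in-scattered radiance along the ray: {in_scattering():.4f}")
```

The deeper the unshadowed stretch a ray passes through, the more these samples accumulate, which is the same per-pixel quantity the extruded geometry ends up summing.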

nvidia-2016-gdc-volumetriclighting-shaftsfinal.png

Even though it seemed like this effect was independent of rendering method, since it basically just adds geometry to the scene, I asked whether it was locked to deferred rendering. NVIDIA said that it should be unrelated, as I suspected, which is good for VR: forward rendering is easier to anti-alias, which makes the uneven pixel distribution (after lens distortion) appear smoother.

Read on to see the other four technologies, and a little announcement about source access.

Video Perspective: Spending the day with an Oculus Rift

Subject: Graphics Cards | March 16, 2016 - 02:00 PM |
Tagged: video, rift, Oculus

As part of our second day at GDC, Ken and I spent 4+ hours with Oculus during their "Game Days 2016" event, an opportunity for us to taste-test games in 30-minute blocks, getting more hands-on time than we ever have before. The event was perfectly organized and easy to work in, and it helps that the product is amazing as well.

oculus-rift-consumer-edition.png

Of the 40-ish games available to play, 30 will be available on the Rift's launch day, March 28th. We were able to spend some time with the following:

We aren't game reviewers here, but we obviously have a deep interest in games, and thus, having access to these games is awesome. But more than that, access to the best software that VR will have to offer this spring is invaluable as we continue to evaluate hardware accurately for our readers. 

kitchen2.jpg

Job Simulator

Ken and I sat down after the Oculus event to talk about the games we played, the experiences we had and what input the developers had about the technical issues and concerns surrounding VR development.

AMD's Raja Koduri talks moving past CrossFire, smaller GPU dies, HBM2 and more.

Subject: Graphics Cards | March 15, 2016 - 06:02 AM |
Tagged: vulkan, raja koduri, Polaris, HBM2, hbm, dx12, crossfire, amd

After hosting the AMD Capsaicin event at GDC tonight, Raja Koduri, SVP and Chief Architect of the Radeon Technologies Group, sat down with me to talk about the event and offered up some additional details on the Radeon Pro Duo, upcoming Polaris GPUs and more. The video below has the full interview, but there are several highlights that stand out as noteworthy.

  • Raja claimed that one of the reasons to launch the dual-Fiji card as the Radeon Pro Duo for developers, rather than a pure Radeon aimed at gamers, was to “get past CrossFire.” He believes we are at an inflection point with APIs: where previously the driver would abstract two GPUs to appear as a single GPU to the game engine, with DX12 and Vulkan the problem is more complex than that, as we have seen in testing with early titles like Ashes of the Singularity. (A conceptual sketch of this shift follows after this list.)

    But with the dual-Fiji product mostly developed and prepared, AMD was able to find a market between the enthusiast and the creator to target, and thus the Radeon Pro branding was born.

    pro_duo_zoom.jpg

    Raja further expanded on it, telling me that in order to make multi-GPU useful and productive for the next generation of APIs, getting multi-GPU hardware solutions in the hands of developers is crucial. He admitted that CrossFire has had performance scaling concerns and compatibility issues in the past, and that getting multi-GPU right from the ground floor this time is essential.
     

  • With the changes in Moore’s Law and the realities of process technology and processor construction, multi-GPU is going to be more important for the entire product stack, not just the extreme enthusiast crowd. Why? Because those realities dictate that GPU vendors build smaller, more power-efficient GPUs, so to scale performance overall, multi-GPU solutions need to be efficient and plentiful. The “economics of the smaller die” are much better for AMD (and, we assume, NVIDIA), and by 2017-2019 this will be the reality and the way graphics performance scales.

    Getting the software ecosystem going now is going to be crucial to ease into that standard.
     

  • The naming scheme of Polaris (10, 11…) follows no formula; it’s just “a sequence of numbers” and we should only expect it to increase going forward. The next Polaris chip will be bigger than 11; that’s the secret he gave us.

    There have been concerns that AMD was only going to go for the mainstream gaming market with Polaris but Raja promised me and our readers that we “would be really really pleased.” We expect to see Polaris-based GPUs across the entire performance stack.
     

  • AMD’s primary goal here is to get many millions of gamers VR-ready, though getting enthusiasts “that last millisecond” is still a goal, and Raja says it will happen with Radeon.
     
  • No solid date on Polaris parts at all – I tried! (Other than that the launches start in June.) Though Raja did promise that, after tonight, he will not have his next alcoholic beverage until the launch of Polaris. Serious commitment!
     
  • Curious about the HBM2 inclusion in Vega on the roadmap and what that means for Polaris? Though he didn’t say it outright, it appears that Polaris will be using HBM1, leaving me to wonder about the memory capacity limitations inherent in that. Has AMD found a way to get past the 4GB barrier? We are trying to figure that out for sure.

    roadmap.jpg

    Why is Polaris going to use HBM1? Raja pointed to the extreme cost of building the HBM ecosystem and prepping the pipeline for the new memory technology as the culprit, and AMD obviously wants to recoup some of that investment with another generation of GPU usage.
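
To make the shift Raja describes more tangible, here is a purely hypothetical sketch (not any real API) of what "the engine sees every GPU" means: under DX12 and Vulkan, the application enumerates each adapter explicitly and decides how to distribute work, where the CrossFire driver used to hide that decision behind a single virtual GPU.

```python
class Gpu:
    """Stand-in for an explicitly enumerated adapter; not a real API."""
    def __init__(self, name):
        self.name = name

    def submit(self, workload):
        print(f"{self.name} <- {workload}")

def alternate_frame_rendering(gpus, num_frames):
    # The simplest explicit strategy: round-robin whole frames across GPUs,
    # the same idea the CrossFire driver used to apply invisibly.
    for frame in range(num_frames):
        gpus[frame % len(gpus)].submit(f"frame {frame}")

# e.g. the two Fiji GPUs on a Radeon Pro Duo
alternate_frame_rendering([Gpu("Fiji #0"), Gpu("Fiji #1")], num_frames=6)
```

In practice engines can do far smarter things than round-robin frames, such as splitting a single frame or assigning asymmetric workloads, which is exactly why getting hardware like the Pro Duo to developers early matters.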

Speaking with Raja is always interesting, and the confidence and knowledge he showcases are what give me assurance that the Radeon Technologies Group is headed in the right direction. This is going to be a very interesting year for graphics, PC gaming and GPU technologies, as showcased throughout the Capsaicin event, and I think everyone should be looking forward to it.

AMD claims 83% VR market share, shows VR on Polaris

Subject: Graphics Cards | March 14, 2016 - 11:00 PM |
Tagged: VR, radeon pro duo, radeon, capsaicin, amd

As part of AMD’s Capsaicin event in San Francisco today, the company is making some bold statements around its strategy for VR, including massive market share dominance, new readiness programs and the future of VR with the Polaris architecture due out this year.

The most surprising statement made by AMD at the event was the claim that “AMD is powering the overwhelming majority of home entertainment VR systems around the world, with an estimated 83 percent market share.” This is obviously not based on discrete GPU sales in the PC market alone, but instead includes sales of the PlayStation 4 game console, for which Sony will launch its own PlayStation VR headset later this year. (Side note: does JPR not count the array of Samsung phones as “home entertainment VR” systems?)

maxresdefault.jpg

There is no denying that Sony's install base with the PS4 has put AMD in the driver's seat when it comes to global gaming GPU distribution, but as of today this advantage has not amounted to anything noticeable in the PC space – a stance that AMD was selling hard before the consoles’ launch. I am hesitant to put any weight behind AMD’s PS4 integration for VR moving forward, so the company will have to prove that this is in fact an advantage for the chip maker going into 2016.

AMD is talking up other partnerships as well, including those with HTC and Oculus for their respective headset launches, due in the next 30 days. Beyond that, AMD hardware is being used in the just-announced Sulon Q wireless VR headset, and the company has deals in place with various healthcare, media and educational outlets to seed development hardware.

side_0.jpg

For system vendors and add-in card builders, AMD is launching a certification program that will create labels of “Radeon™ VR Ready Premium” and “Radeon™ VR Ready Creator”. The former will be assigned to graphics cards at Radeon R9 290 performance and above to indicate they are capable of meeting the specifications required by Oculus and HTC for their VR headsets; the latter is going to be assigned only to the Radeon Pro Duo dual-Fiji graphics card, meant to target developers that need maximum performance.

Finally, AMD is showing that its next generation graphics architecture, Polaris, is capable of VR as well.

AMD today demonstrated for the first time ever the company’s forthcoming Polaris 10 GPU running Valve’s Aperture Science Robot Repair demo powered by the HTC Vive Pre. The sample GPU features the recently announced Polaris GPU architecture designed for 14nm FinFET, optimized for DirectX® 12 and VR, and boasts significant architectural improvements over previous AMD architectures, including HDR monitor support, industry-leading performance per watt, and AMD’s 4th generation Graphics Core Next (GCN) architecture.

We are still waiting to see if this is the same silicon that AMD showed at CES, a mainstream part, or if we might be witnessing the first demo of a higher-end part, whetting the appetite of the enthusiast community.

AMD Named Exclusive GPU Provider for Crytek VR First

Subject: Graphics Cards | March 14, 2016 - 11:00 PM |
Tagged: crytek, CRYENGINE, amd

AMD will be the sole GPU presence in the labs at universities participating in Crytek’s VR First initiative, which “provides colleges and universities a ready-made VR solution for developers, students and researchers”, according to AMD.

amd_crytek.jpg

AMD is leveraging the newly-announced Radeon Pro Duo graphics cards for this partnership, which lends immediate credibility to their positioning of the new GPU for VR development.

“The new labs will be equipped with AMD Radeon™ Pro Duo graphics cards with LiquidVR™ SDK, the world’s fastest VR content creator platform bridging content creation and consumption and offering an astonishing 16 teraflops of compute power. Designed to be compatible with multiple head mounted displays, including the Oculus Rift™ and HTC Vive™, AMD Radeon™ Pro Duo cards will encourage grassroots VR development around the world. The initial VR First Lab at Bahçeşehir University in Istanbul is already up and running in January of this year.”

PRODUO.png

Crytek CEO Cevat Yerli explains VR First:

“VR First labs will become key incubators for nurturing new talent in VR development and creating a global community well-prepared to innovate in this exciting and emerging field. VR experiences, harnessing the power of the CRYENGINE and developed using world-class Radeon™ hardware and software, will have the potential to fundamentally transform how we interact with technology.”

This certainly appears to be an early win for AMD in VR development, at least in the higher education sector.

Source: AMD

AMD launches dual-Fiji card as Radeon Pro Duo, targeting VR developers, for $1500

Subject: Graphics Cards | March 14, 2016 - 11:00 PM |
Tagged: VR, radeon pro duo, radeon, Fiji, dual fiji, capsaicin, amd

It’s finally here, and AMD is ready to ship it: the much discussed and often debated dual-Fiji graphics card that the company first showed with the launch of the Fury series of Radeon cards way back in June of last year. It was unnamed then, and I started calling it the AMD Fury X2, but it seems that AMD has other plans for this massive compute powerhouse, now with a price tag of $1,499.

Radeon_Pro_Duo_01.jpg

As part of the company’s Capsaicin event at GDC tonight, AMD showed the AMD Radeon Pro Duo, calling it the “most powerful platform for VR” among other things. The card itself is a dual-slot configuration with what appears to be a (very thick) 120mm self-contained liquid cooler, similar to the Fury X design. You’ll need three 8-pin power connectors for the Radeon Pro Duo as well, but assuming you are investing in this kind of hardware that should be no issue.

Even with the integration of HBM to help minimize the footprint of the GPU and memory system, the Radeon Pro Duo is a bit taller than the standard bracket and is comparable in length to a standard graphics card.

Radeon_Pro_Duo_02.jpg

AMD isn’t telling us much about performance in the early data provided, only mentioning again that the card provides 16 teraflops of compute performance. This is just about double that of the Fury X, the single-GPU variant released last year; clearly the benefit of water cooling the Pro Duo is that it can run at maximum clock speeds.

Probably the biggest change from what we learned about the dual-GPU card in June to today is its target market. AMD claims that the Radeon Pro Duo is “aimed at all aspects of the VR developer lifestyle: developing content more rapidly for tomorrow’s killer VR experiences while at work, and playing the latest DirectX® 12 experiences at maximum fidelity while off work.” Of course you can use this card for gaming – it will show up in your system just as any dual-GPU configuration would, and it can be taken advantage of at the same level that owning two Fury X cards would allow.

Radeon_Pro_Duo_03.jpg

The Radeon Pro Duo is cooled by a rear-mounted liquid cooler in this photo

That being said, with a price tag of $1,499, it makes very little sense for gamers to invest in this product for gaming alone. Just as we have said about the NVIDIA TITAN line of products, they are the best of the best but are priced to attract developers rather than gamers. In the past AMD had ridiculed NVIDIA for this kind of move but it seems that the math just works here – the dual-Fiji card is likely a high cost, low yield, low production part. Add to that the fact that it was originally promised in Q3 2015, and that AMD has publicly stated that its Polaris-based GPUs would be ready starting in June, and the window for a consumer variant of the Radeon Pro Duo is likely closed.

"The Radeon Pro Duo is AMD's evolution of their high-end graphics card strategy with them positioning the Radeon Pro Duo more towards a content creator audience rather than gamers. This helps justify the higher price and lower volumes as well as gives developers the frame of mind to develop for multi-GPU VR from the get go rather than as an afterthought." - Anshel Sag, Analyst at Moor Insights & Strategy

For engineers, developers, educational outlets and other professional landscapes, though, the pure processing power compressed into a single board will be incredibly useful. And of course, it's there for those gamers out there with unlimited budgets and the need to go against our recommendations.


Update: AMD's Capsaicin livestream included some interesting slides on the new Pro Duo GPU, including some of the capabilities and a look at the cooling system.

PRO_DUO_GPU.png

The industrial design has carried over from the Fury X

As we see from this slide, the Pro Duo offers 2x Fiji GPUs with 8GB of HBM, and boasts 16 TFLOPS of compute power (the AMD Nano offers 8.19 TFLOPS, so this is consistent with a dual-Nano setup).
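
The slide's numbers also check out with simple back-of-envelope math. Peak FP32 throughput for a GPU is two operations per stream processor per clock (a fused multiply-add counts as two FLOPS); the shader count below is Fiji's published spec, and the Nano-like 1000 MHz clock is inferred from the 8.19 TFLOPS figure:

```python
FIJI_STREAM_PROCESSORS = 4096  # published Fiji spec
CLOCK_GHZ = 1.0                # R9 Nano's peak clock (1000 MHz)

# 2 FP32 ops per stream processor per clock (multiply + add via FMA)
tflops_per_gpu = 2 * FIJI_STREAM_PROCESSORS * CLOCK_GHZ / 1000.0
print(f"one Fiji GPU:   {tflops_per_gpu:.2f} TFLOPS")       # ~8.19
print(f"Radeon Pro Duo: {2 * tflops_per_gpu:.2f} TFLOPS")   # ~16.4, matching the claim
```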

PRO_DUO_GPU_BLOCK.png

The cooling system is again a Cooler Master design, with a separate block for each GPU. The hoses have a nice braided cover and lead to a very thick-looking radiator with a pre-attached fan.

PRO_DUO_COOLER.png

From the look of the fan blades, this cooler is designed to move quite a bit of air, and it will need to, considering a single (120 mm?) radiator is handling cooling for a pair of high-end GPUs. Temperatures and noise levels will be something to look for when we have hardware in hand.

AMD Live Stream and Live Blog TODAY!!

Subject: Graphics Cards | March 14, 2016 - 08:41 PM |
Tagged: video, live, capsaicin, amd

AMD is hosting an event during the Game Developers Conference called Capsaicin, focused on VR and the new Polaris architecture, and the company will be announcing some new products we can't discuss quite yet. (*wink*) On our PC Perspective Live! page we are hosting AMD's live stream and will be adding our commentary with a live blog. Won't you join us?
 
AMD Capsaicin.jpg
 
 
The event starts at 4pm PT / 7pm ET

upload1.jpg

Live from the event as the team puts the final touches on tonight's big show by AMD Radeon.

upload2.jpg

Rumor: NVIDIA's Next GPU Called GTX 1080, Uses GDDR5X

Subject: Graphics Cards | March 11, 2016 - 10:03 PM |
Tagged: rumor, report, pascal, nvidia, HBM2, gtx1080, GTX 1080, gtx, GP104, geforce, gddr5x

We are expecting news of the next NVIDIA graphics card this spring, and as usual whenever an announcement is imminent we have started seeing some rumors about the next GeForce card.

GeForce-GTX.jpg

(Image credit: NVIDIA)

Pascal is the name we've all been hearing about, and along with this next-gen core we've been expecting HBM2 (second-gen High Bandwidth Memory). This makes today's rumor all the more interesting, as VideoCardz is reporting (via BenchLife) that a card called either the GTX 1080 or GTX 1800 will be announced, using the GP104 GPU core with 8GB of GDDR5X - and not HBM2.

The report also claims that NVIDIA CEO Jen-Hsun Huang will have an announcement for Pascal in April, which leads us to believe a shipping product based on Pascal is finally in the works. Taking in all of the information from the BenchLife report, VideoCardz has created this list to summarize the rumors (taken directly from the source link):

  • Pascal launch in April
  • GTX 1080/1800 launch in May 27th
  • GTX 1080/1800 has GP104 Pascal GPU
  • GTX 1080/1800 has 8GB GDDR5X memory
  • GTX 1080/1800 has one 8pin power connector
  • GTX 1080/1800 has 1x DVI, 1x HDMI, 2x DisplayPort
  • First Pascal board with HBM would be GP100 (Big Pascal)

VideoCardz_Chart.png

Rumored GTX 1080 Specs (Credit: VideoCardz)

The alleged single 8-pin power connector with this GTX 1080 would place the power ceiling at 225W (75W from the PCIe slot plus 150W from the 8-pin connector), though the card could very well require less power. The GTX 980 is only a 165W part, with the GTX 980 Ti rated at 250W.
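
For reference, that ceiling is just standard PCI Express power-budget arithmetic; the limits below come from the PCIe specification, not from anything specific to the rumored card:

```python
SLOT_W = 75        # a PCIe x16 slot delivers up to 75 W
EIGHT_PIN_W = 150  # each 8-pin PEG connector adds up to 150 W

print(f"slot + one 8-pin: {SLOT_W + EIGHT_PIN_W} W")  # 225 W, the rumored ceiling
```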

As always, only time will tell how accurate these rumors are. VideoCardz points out that "BenchLife stories are usually correct", though they are skeptical of this report based on the GTX 1080 name (even if that name would follow the current GeForce naming scheme).

Source: VideoCardz

AMD Announces XConnect Technology for External Graphics

Subject: Graphics Cards | March 10, 2016 - 06:27 PM |
Tagged: XConnect, thunderbolt 3, radeon, graphics card, gpu, gaming laptop, external gpu, amd

AMD has announced their new external GPU technology called XConnect, which leverages support from the latest Radeon driver to support AMD graphics over Thunderbolt 3.

AMD_SCREEN.png

The technology showcased by AMD is powered by Razer, who partnered with AMD to come up with an expandable solution that supports up to 375W GPUs, including R9 Fury, R9 Nano, and all R9 300 series GPUs up to the R9 390X (there is no liquid cooling support, and the R9 Fury X isn't listed as being compatible). The notebook in AMD's marketing material is the Razer Blade Stealth, which offers the Razer Core external GPU enclosure as an optional accessory. (More information about these products from Razer here.) XConnect is not tied to any vendor, however; this is "generic driver" support for GPUs over Thunderbolt 3.

AMD has posted this video with the head of Global Technical Marketing, Robert Hallock, to explain the new tech and show off the Razer hardware:

The exciting part has to be the promise of an industry standard for external graphics, something many have hoped for. Not everyone will produce a product exactly like Razer has, since there is no requirement to provide a future upgrade path in a larger enclosure like this, but the important thing is that Thunderbolt 3 support is built in to the newest Radeon Crimson drivers.

Here are the system requirements for AMD XConnect from AMD:

  • Radeon Software 16.2.2 driver (or later)
  • 1x Thunderbolt 3 port
  • 40Gbps Thunderbolt 3 cable
  • Windows 10 build 10586 (or later)
  • BIOS support for external graphics over Thunderbolt 3 (check with system vendor for details)
  • Certified Thunderbolt 3 graphics enclosure configured with supported Radeon R9 Series GPU
  • Thunderbolt firmware (NVM) v.16

AMD_SLIDE.png

The announcement introduces all sorts of possibilities. How awesome would it be to see a tiny solution with an R9 Nano powered by, say, an SFX power supply? Or what about a dual-GPU enclosure (possibly requiring 2 Thunderbolt 3 connections?), or an enclosure supporting liquid cooling (and the R9 Fury X)? The potential is certainly there, and with a standard in place we could see some really interesting products in the near future (or even DIY solutions). It's a promising time for mobile gaming!

Source: AMD
Manufacturer: GitHub

A start to proper testing

During all the commotion last week surrounding the release of a new Ashes of the Singularity DX12 benchmark, Microsoft's launch of Gears of War Ultimate Edition on the Windows Store and the company's supposed desire to merge Xbox and PC gaming, a constant source of insight for me was one Andrew Lauritzen. Andrew is a graphics guru at Intel with extensive knowledge of DirectX, rendering, engines, etc., and he has always been willing to educate me on areas that crop up. The entire DirectX 12 and Universal Windows Platform discussion was definitely one such instance.

Yesterday morning Andrew pointed me to a GitHub release for a tool called PresentMon, a small sample of code written by a colleague of Andrew's that might be the beginning of a way to properly monitor the performance of DX12 games and even UWP games.

The idea is simple, and its implementation even simpler: PresentMon monitors the Windows event tracing stack for Present commands and records data about them to a CSV file. Anyone familiar with the kind of ETW data you can gather will appreciate that PresentMon culls out nearly all of the headache of data gathering by simplifying the results into application name/ID, Present call deltas and a bit more.

gears.jpg

Gears of War Ultimate Edition - the debated UWP version

The "Present" method in Windows is what produces a frame and shows it to the user. PresentMon looks at the Windows events running through the system, takes note of when those Present commands are received by the OS for any given application, and records the time between them. Because this tool runs at the OS level, it can capture Present data from all kinds of APIs, including DX12, DX11, OpenGL, Vulkan and more. It does have limitations, though - it is read-only, so producing an overlay on the game/application being tested isn't possible today. (Or maybe ever, in the case of UWP games.)

What PresentMon offers us at this stage is an early look at a Fraps-like performance monitoring tool. Just as Fraps watched for Present commands from Windows and recorded them, PresentMon does the same thing, at a very similar point in the rendering pipeline. What is important and unique about PresentMon is that it is API-independent and useful for all types of games and programs.
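
To give a sense of what can be done with that output, here is a minimal sketch that turns a PresentMon CSV into basic frame-time statistics. The column names ("Application", "MsBetweenPresents") and the capture file and process names are assumptions for illustration, so check the header row of your own capture before relying on them:

```python
import csv
import statistics

def summarize(csv_path, app_name):
    """Compute basic frame-time stats for one application's Present events."""
    frame_times = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("Application") == app_name:
                frame_times.append(float(row["MsBetweenPresents"]))
    if not frame_times:
        return None
    avg_ms = statistics.mean(frame_times)
    worst = sorted(frame_times)[int(0.99 * (len(frame_times) - 1))]
    return {
        "frames": len(frame_times),
        "avg frame time (ms)": round(avg_ms, 2),
        "avg FPS": round(1000.0 / avg_ms, 1),
        "99th percentile (ms)": round(worst, 2),
    }

# hypothetical capture file and process name
print(summarize("presentmon_capture.csv", "Gears.exe"))
```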

presentmonscreen.png

PresentMon at work

The first and obvious question for our readers is how this performance monitoring tool compares with Frame Rating, the FCAT-based capture benchmarking platform we have used on GPUs and CPUs for years now. To be honest, it's not the same and should not be considered an analog to it. Frame Rating and capture-based testing look for smoothness, dropped frames and performance at the display, while Fraps and PresentMon look at performance closer to the OS level, before the graphics driver really gets the final say in things. I am still aiming for universal DX12 Frame Rating testing with exclusive-fullscreen-capable applications and expect that to be ready sooner rather than later. However, what PresentMon does give us is at least an early, universal look at DX12 performance, including games that are locked behind the Windows Store rules.

Continue reading our look at the new PresentMon tool!!