Is the GPU in Intel Kaby Lake-G More Polaris than Vega?

Subject: Graphics Cards, Processors | April 9, 2018 - 04:25 PM |
Tagged: Vega, Polaris, kaby lake-g, Intel, amd

Over the weekend, some interesting information surfaced surrounding the new Kaby Lake-G hardware from Intel. A product that is officially called the “8th Generation Intel Core Processors with Radeon RX Vega M Graphics” is now looking like it might be more of a Polaris-based GPU than a Vega-based one. This creates an interesting marketing and technology-capability discussion for the community, and for both Intel and AMD, that is worth diving into.

PCWorld first posted the question this weekend, using some interesting data points to back up the idea that Kaby Lake-G may in fact be based on Polaris. In Gordon’s story, he notes that AIDA64 identifies the GPU as “Polaris 22” while the Raven Ridge-based APUs from AMD show up as “Raven Ridge.” Obviously, device identification by a third-party piece of software is suspect evidence in any situation, but the second point is more salient: based on the DxDiag information, the GPU on the Hades Canyon NUC powered by Kaby Lake-G does not support DirectX 12.1.

dx_diag_comparo-100754201-orig.jpg

Image source: PCWorld

AMD clearly stated in its launch of the Vega architecture last year that the new GPUs supported DX 12.1, among other features. The fact that the KBL-G part does NOT include support for it is compelling evidence that the GPU might be more similar to Polaris than Vega.

Tom’s Hardware did some more digging, posted this morning, using a SiSoft Sandra test that can measure the performance of FP16 and FP32 math. For both the Radeon RX Vega 64 and 56 discrete graphics cards, running the test with FP16 math results in a score that is 65% faster than the FP32 result. With a Polaris-based graphics card, an RX 470, the FP32 and FP16 scores were identical: the architecture can execute FP16 math but doesn’t accelerate it with AMD’s “rapid packed math” feature (which was part of the Vega launch).

tomsmath.jpg

Image source: Tom's Hardware

And, you guessed it, the Kaby Lake-G part runs essentially even in the FP16 mode. (Also note that AMD’s Raven Ridge APU, with its integrated Vega graphics, does get a 61% speedup from FP16.)
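
For readers who want to run this kind of check themselves, the logic is simple: compare FP16 and FP32 throughput from any GPGPU benchmark and look at the ratio. The sketch below is a minimal illustration of how that ratio is read, using invented throughput numbers rather than Sandra's actual output.

```cpp
// A minimal sketch of interpreting an FP16-vs-FP32 throughput comparison, with
// invented numbers (not Sandra's output). Roughly 2x FP16 throughput suggests
// packed math is active; roughly equal suggests Polaris-like handling.
#include <cstdio>

const char* interpret(double fp16_gflops, double fp32_gflops) {
    double ratio = fp16_gflops / fp32_gflops;
    if (ratio > 1.5) return "FP16 accelerated (rapid-packed-math-like behavior)";
    if (ratio > 0.9) return "FP16 runs at FP32 rate (Polaris-like behavior)";
    return "FP16 slower than FP32 (likely emulated or demoted)";
}

int main() {
    printf("%s\n", interpret(21000.0, 12700.0)); // ~1.65x ratio, like the Vega 64/56 results
    printf("%s\n", interpret( 4900.0,  4900.0)); // ~1.0x ratio, like the RX 470 and Kaby Lake-G
    return 0;
}
```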

What Kaby Lake-G does have that leans toward Vega is support for HBM2 memory (which none of the Polaris cards have) and “high bandwidth memory cache controller and enhanced compute units with additional ROPs” according to the statement from Intel given to Tom’s Hardware.

It should be noted that just because the benchmarks and games that support rapid packed math don’t take advantage of that capability today does not mean they won’t be able to do so after a driver or firmware update. That said, whether or not that is the plan, Intel should come out and tell consumers and the media.

The debate and accusations of conspiracy are running rampant again today with this news. Is Intel trying to pull one over on us by telling the community that this is a Vega-based product when it is in fact based on Polaris? Why would AMD allow and promote the Vega branding with a part that it knows didn’t meet the standards it created to be called a Vega architecture solution?

Another interesting thought comes when analyzing this debate with the Ryzen 7 2400G and Ryzen 5 2200G products, both of which claim to use Vega GPUs as a portion of the APU. However, without support for HBM2 or the high-bandwidth cache controller, does that somehow shortchange the branding for it? Or are the memory features of the GPU considered secondary to its design?

This is the very reason why companies hate labels, hate specifications, and hate having all of this tracked by a competent and technical media. Basically every company in the tech industry is guilty of this practice: Intel has 2-3 architectures running as “8th Generation” in the market, AMD is selling RX 500 cards that were once RX 400 cards, and NVIDIA has changed performance capabilities of the MX 150 at least once or twice.

The nature of semi-custom chip designs is that they are custom. Are the GPUs used in the PS4 and Xbox One or Xbox One X called Polaris, Vega, or something else? It would be safer for AMD and its partners to give each new product its own name, its own brand—but then enthusiasts would want to know what it was most like, and how it compared to Polaris, Vega, and so on. It’s also possible that AMD was only willing to sell this product to Intel if it included some of these feature restrictions. In complicated negotiations like this one surely was, anything is feasible.

These are tough choices for companies to make. AMD loves having the Vega branding in more products, as it gives weight to the development cost and time it spent on the design. Having Vega associated with more high-end consumer products, including those sold by Intel, gives AMD leverage for other products down the road. From Intel’s vantage point, using the Vega brand makes it look like it has the very latest technology in its new processor, and it can benefit from any cross-promotion that occurs around the Vega brand from AMD or its partners.

Unfortunately, it means that the devil is in the details, and the details are something that no one appears to be willing to share. Does it change the performance we saw in our recent Hades Canyon NUC review or our perspective on it as a product? It does not. But as features like Rapid Packed Math or the new geometry shader accelerate in adoption, the capability for Kaby Lake-G to utilize them is going to be scrutinized more heavily.

Source: Various

Considering picking up a vintage GPU?

Subject: Graphics Cards | April 2, 2018 - 02:36 PM |
Tagged: cryptocurrency, graphics cards

It has been a while since the Hardware Leaderboard has been updated, as it is incredibly depressing to try to price out a new GPU, for obvious reasons. TechSpot has taken an interesting approach to dealing with the crypto-blues: they have just benchmarked 44 older GPUs on current games to see how well they fare. The cards range from the GTX 560 and HD 7770 through to current-model cards, which are available to purchase used from sites such as eBay. Buying a used card brings the price down to somewhat reasonable levels, though you do run the risk of getting a dead or dying card. With interesting metrics such as price per frame, this is a great resource if you find yourself in desperate need of a GPU in the current market. Check it out here.
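
If you want to apply the same price-per-frame metric to a listing you are eyeing, the arithmetic is just the purchase price divided by the average frame rate the card delivers in your games. A trivial sketch, with invented numbers rather than TechSpot's figures:

```cpp
// A toy cost-per-frame calculation in the spirit of TechSpot's metric; the
// price and frame rate below are hypothetical, not figures from their article.
#include <cstdio>

int main() {
    double used_price_usd = 120.0;  // hypothetical eBay price for an older card
    double average_fps    = 48.0;   // hypothetical average across a test suite
    printf("Cost per frame: $%.2f\n", used_price_usd / average_fps); // ~$2.50
    return 0;
}
```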

2018-03-21-image.jpg

"Along with our recent editorials on why it's a bad time to build a gaming PC, we've been revisiting some older GPUs to see how they hold up in today's games. But how do you know how much you should be paying for a secondhand graphics card?"

Here are some more Graphics Card articles from around the web:

Graphics Cards

Source: TechSpot

Eight-GPU SLI in Unreal Engine 4 (Yes There Is a Catch)

Subject: Graphics Cards | March 29, 2018 - 09:52 PM |
Tagged: nvidia, GTC, gp102, quadro p6000

At GTC 2018, Walt Disney Imagineering unveiled a work-in-progress clip of their upcoming Star Wars: Galaxy’s Edge attraction, which is expected to launch next year at Disneyland and Walt Disney World Resort. The cool part about this ride is that it will be using Unreal Engine 4 with eight GP102-based Quadro P6000 graphics cards. NVIDIA also reports that Disney has donated the code back to Epic Games to help them with their multi-GPU scaling in general – a win for us consumers… in a more limited fashion.

nvidia-2018-GTC-starwars-8-way-sli.jpg

See? SLI doesn’t need to be limited to two cards if you have a market cap of $100 billion USD.

Another interesting angle to this story is how typical PC components are contributing to these large experiences. Sure, Quadro hardware isn’t exactly cheap, but it can be purchased through typical retail channels, and that allows the company to focus its engineering time elsewhere.

Ironically, this also comes about two decades after location-based entertainment started to decline… but, you know, it’s Disneyland and Disney World. They’re fine.

Source: NVIDIA

ASRock Enters Graphics Card Market With Phantom Gaming Series of AMD GPUs

Subject: Graphics Cards | March 29, 2018 - 05:45 PM |
Tagged: RX 580, RX 570, RX 560, RX 550, Polaris, mining, asrock, amd

ASRock, a company known mostly for its motherboards, was formerly an ASUS sub-brand and has been an independent company owned by Pegatron since 2010. It is now getting into the graphics card market with a new Phantom Gaming series. At launch, the Phantom Gaming series is comprised of four AMD Polaris-based graphics cards: the Phantom Gaming RX 550 2G and RX 560 2G on the low end, and the Phantom Gaming X RX 570 8G OC and RX 580 8G OC on the mid/high-end range.

Phantom Gaming X Radeon RX580 8G OC(L4).png

ASRock is using black shrouds with white accents and silver and red logos. The lower end Phantom Gaming cards utilize a single dual ball bearing fan while the Phantom Gaming X cards use a dual fan configuration. ASRock is using copper baseplates paired with aluminum heatsinks and composite heatpipes. The Phantom Gaming RX 550 and RX 560 cards use only PCI-E slot power while the Phantom Gaming X RX 570 and RX 580 cards get power from both the slot and a single 8-pin PCI-E power connector.

Video outputs include one HDMI 2.0, one DisplayPort 1.4, and one DL-DVI-D on the Phantom Gaming parts, and one HDMI 2.0, three DisplayPort 1.4, and one DL-DVI-D on the higher-end Phantom Gaming X graphics cards. All of the graphics card models feature both silent and overclocked modes in addition to their out-of-the-box default clocks, depending on whether you value noise or performance. Users can select which mode they want, or perform a custom overclock or fan curve, using ASRock's Phantom Gaming Tweak utility.

On the performance front, out of the box ASRock is slightly overclocking the Phantom Gaming X OC cards (the RX 570 and RX 580 based ones) and slightly underclocking the lower end Phantom Gaming cards (including the memory which is downclocked to 6 GHz) compared to their AMD reference specifications.

|                              | ASRock RX 580 OC | RX 580 | ASRock RX 570 OC | RX 570 | ASRock RX 560 | RX 560 | ASRock RX 550 | RX 550 |
|------------------------------|------------------|--------|------------------|--------|---------------|--------|---------------|--------|
| Cores                        | 2304 | 2304 | 2048 | 2048 | 896 | 896 | 512 | 512 |
| GPU Clock (MHz)              | 1380 | 1340 | 1280 | 1244 | 1149 | 1275 | 1100 | 1183 |
| GPU Clock, OC Mode (MHz)     | 1435 | - | 1331 | - | 1194 | - | 1144 | - |
| Memory (GDDR5)               | 8GB | 8GB | 8GB | 8GB | 2GB | 2GB/4GB | 2GB | 2GB/4GB |
| Memory Clock (GHz)           | 8 | 8 | 7 | 7 | 6 | 7 | 6 | 7 |
| Memory Clock, OC Mode (MHz)  | 8320 | - | 7280 | - | 6240 | - | 6240 | - |
| Texture Units                | 144 | 144 | 128 | 128 | 64 | 64 | 32 | 32 |
| ROPs                         | 32 | 32 | 32 | 32 | 16 | 16 | 16 | 16 |

The table above shows the comparisons between the ASRock graphics cards and their AMD reference card counterparts. Note that the Phantom Gaming RX 560 2G is based on the cut-down 14 CU (compute unit) model rather than the launch 16 CU GPU. Also, even in OC Mode, ASRock does not bring the memory up to the 7 GT/s reference spec. On the positive side, turning on OC mode does give a decent factory overclock of the GPU over reference. Also nice to see is that on the higher end "OC Certified" Phantom Gaming X cards, ASRock overclocks both the GPU and memory speeds which is often not the case with factory overclocks.

Phantom Gaming Radeon RX550 2G(L1).png

ASRock did not detail pricing for any of the launch cards, but it should be coming soon, with 4GB models of the RX 560 and RX 550 to follow later this year.

It is always nice to have more competition in this space, and hopefully a new AIB partner for AMD helps alleviate shortages and demand for gaming cards, if only by a bit. I am curious how well the cards will perform: they look good on paper, but the company is new to graphics cards and the build quality really needs to be there. I am just hoping that the Phantom Gaming moniker is not an allusion to how hard these cards are going to be to find for gaming! (heh) If the rumored Ethereum ASICs do not kill demand for AMD GPUs, I do expect that ASRock will also release mining-specific cards at some point.

What are your thoughts on the news of ASRock moving into graphics cards?


Source: Tech Report
Manufacturer: Intel

System Overview

Announced at Intel's Developer Forum in 2012, and launched later that year, the Next Unit of Computing (NUC) project was initially a bit confusing to the enthusiast PC press. In a market that appeared to be discarding traditional desktops in favor of notebooks, it seemed a bit odd to launch a product that still depended on a monitor, mouse, and keyboard, yet didn't provide any more computing power.

Despite this criticism, the NUC lineup has rapidly expanded over the years, seeing success in areas such as digital signage and enterprise environments. However, the enthusiast PC market has mostly eluded the lure of the NUC.

Intel's Skylake-based Skull Canyon NUC was the company's first attempt to cater to the enthusiast market, with a slight departure from the traditional 4-in x 4-in form factor and the adoption of its best-ever integrated graphics solution, Iris Pro. Additionally, the ability to connect external GPUs via Thunderbolt 3 meant Skull Canyon offered more of a focus on high-end PC graphics.

However, Skull Canyon mostly fell on deaf ears among hardcore PC users, and it seemed that Intel lacked the proper solution to make a "gaming-focused" NUC device—until now.

8th Gen Intel Core processor.jpg

Announced at CES 2018, the lengthily named 8th Gen Intel® Core™ processors With Radeon™ RX Vega M Graphics (henceforth referred to as the code name, Kaby Lake-G) marks a new direction for Intel. By partnering with one of the leaders in high-end PC graphics, AMD, Intel can now pair their processors with graphics capable of playing modern games at high resolutions and frame rates.

DSC04773.JPG

The first product to launch using the new Kaby Lake-G family of processors is Intel's own NUC, the NUC8i7HVK (Hades Canyon). Will the marriage of Intel and AMD finally provide a NUC capable of at least moderate gaming? Let's dig a bit deeper and find out.

Click here to continue reading our review of the Intel Hades Canyon NUC!

GDC 2018: NVIDIA Adds new Ansel and Highlights features to GeForce Experience

Subject: Graphics Cards | March 21, 2018 - 09:37 PM |
Tagged: GDC, GDC 2018, nvidia, geforce experience, ansel, nvidia highlights, call of duty wwii, fortnite, pubg, tekken 7

Building upon the momentum of being included in the two most popular PC games in the world, PlayerUnknown's Battlegrounds and Fortnite, NVIDIA Highlights (previously known as ShadowPlay Highlights) is expanding to even more titles. Support for Call of Duty: WWII and Tekken 7 is available now, with Dying Light: Bad Blood and Escape from Tarkov coming soon.

For those unfamiliar with NVIDIA Highlights, it’s a feature that, when integrated into a game, triggers automatic screen recording when specific events happen. For example, think of the kill cam in Call of Duty. When enabled, Highlights will save a recording whenever the kill cam is triggered, allowing you to share exciting gameplay moments without having to think about it.
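
Conceptually, the integration works like a rolling buffer plus a game-defined trigger. The sketch below is a toy CPU-side model of that idea, not the actual NVIDIA Highlights SDK; all of the class and function names are hypothetical.

```cpp
// Toy model of event-triggered capture (not the real Highlights SDK). Frames go
// into a rolling buffer, and a game event flushes the last few seconds as a clip.
#include <cstdio>
#include <deque>
#include <string>
#include <vector>

struct Frame { int id; };

class HighlightRecorder {
    std::deque<Frame> rolling;   // last N frames kept in memory
    size_t capacity;
public:
    explicit HighlightRecorder(size_t frames) : capacity(frames) {}
    void onFrame(const Frame& f) {
        rolling.push_back(f);
        if (rolling.size() > capacity) rolling.pop_front();
    }
    void onGameEvent(const std::string& name) {   // e.g. the kill cam firing
        std::vector<Frame> clip(rolling.begin(), rolling.end());
        std::printf("Saved highlight '%s' with %zu frames\n", name.c_str(), clip.size());
    }
};

int main() {
    HighlightRecorder rec(300);               // ~5 seconds at 60 fps
    for (int i = 0; i < 1000; ++i) rec.onFrame({i});
    rec.onGameEvent("kill_cam");              // the game integration triggers the save
    return 0;
}
```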

Animated GIF support has also been added to NVIDIA Highlights, allowing users to share shorter clips to platforms including Facebook, Google Photos, and Weibo.

In addition to supporting more games and formats, NVIDIA has also released the NVIDIA Highlights SDK, as well as plugins for Unreal Engine and Unity platforms. Previously, NVIDIA was working with developers to integrate Highlights into their games, but now developers will have the ability to add the support themselves.

Hopefully, these changes mean a quicker influx of titles with Highlights support beyond the 16 that are currently supported.

In addition to enhancements in Highlights, NVIDIA has also launched a new sharing site for screen captures performed with the Ansel in-game photography tool.

The new ShotWithGeforce.com lets users upload and share their captures from any Ansel supported game.

shotwithgeforce.PNG

Screenshots uploaded to Shot With GeForce are tagged with the specific game the capture is from, making it easy for users to scroll through all of the uploaded captures from a given title.

Source: NVIDIA
Manufacturer: Microsoft

O Rayly? Ya Rayly. No Ray!

Microsoft has just announced a raytracing extension to DirectX 12, called DirectX Raytracing (DXR), at the 2018 Game Developers Conference in San Francisco.

microsoft-2015-directx12-logo.jpg

The goal is not to completely replace rasterization… at least not yet. Raytracing will mostly be used for effects that require supplementary datasets, such as reflections, ambient occlusion, and refraction. Rasterization, the typical way that 3D geometry gets drawn on a 2D display, converts triangle coordinates into screen coordinates, and then a point-in-triangle test runs across every sample. This will likely occur once per AA sample (minus pixels that the triangle can’t possibly cover -- such as a pixel outside of the triangle's bounding box -- but that's just optimization).
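
As a concrete, toy illustration of that coverage test, the sketch below walks the samples inside a triangle's bounding box and runs an edge-function point-in-triangle test on each one; the triangle coordinates are made up, and there is one sample per pixel rather than multiple MSAA samples.

```cpp
// Minimal per-sample coverage test: for every sample in the triangle's bounding
// box, edge functions decide whether the sample is inside the triangle.
#include <cstdio>

struct Vec2 { float x, y; };

// Signed area of (a->b, a->p); the sign says which side of the edge p is on.
float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

int main() {
    Vec2 v0{2, 2}, v1{12, 4}, v2{5, 11};          // triangle already in screen space
    int minX = 2, maxX = 12, minY = 2, maxY = 11; // bounding-box optimization from the text
    int covered = 0;
    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};           // one sample per pixel (no MSAA here)
            bool inside = edge(v0, v1, p) >= 0 && edge(v1, v2, p) >= 0 && edge(v2, v0, p) >= 0;
            if (inside) ++covered;                // this is where the pixel shader would run
        }
    }
    printf("%d samples covered\n", covered);
    return 0;
}
```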

microsoft-2018-gdc-directx12raytracing-rasterization.png

For rasterization, each triangle is laid on a 2D grid corresponding to the draw surface.
If any sample is in the triangle, the pixel shader is run.
This example shows the rotated grid MSAA case.

A program, called a pixel shader, is then run with some set of data that the GPU could gather on every valid pixel in the triangle. This set of data typically includes things like world coordinate, screen coordinate, texture coordinates, nearby vertices, and so forth. This lacks a lot of information, especially things that are not visible to the camera. The application is free to provide other sources of data for the shader to crawl… but what?

  • Cubemaps are useful for reflections, but they don’t necessarily match the scene.
  • Voxels are useful for lighting, as seen with NVIDIA’s VXGI and VXAO.

This is where DirectX Raytracing comes in. There are quite a few components to it, but it’s basically a new pipeline that handles how rays are cast into the environment. After being queued, it starts out with a ray-generation stage, and then, depending on what happens to the ray in the scene, there are closest-hit, any-hit, and miss shaders. Ray generation allows the developer to set up how the rays are cast, where they call an HLSL intrinsic instruction, TraceRay (which is a clever way of invoking them, by the way). This function takes an origin and a direction, so you could choose, for example, to cast rays only in the direction of lights if your algorithm were to approximate partially occluded soft shadows from a non-point light. (There are better algorithms to do that, but it's just the first example that came off the top of my head.) The closest-hit, any-hit, and miss shaders occur at the point where the traced ray ends.

To connect this with current technology, imagine that ray-generation is like a vertex shader in rasterization, where it sets up the triangle to be rasterized, leading to pixel shaders being called.
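
To make the flow of those stages concrete, here is a toy CPU-side model of the pipeline, not the real DXR API: a ray-generation step builds rays, a trace call stands in for the HLSL TraceRay intrinsic, and the result dispatches to a closest-hit or miss routine. The any-hit stage, which can fire on potential intersections along the ray, is omitted for brevity.

```cpp
// Toy CPU model of the DXR shader stages (not the real API); the scene
// "intersection" is a stand-in for acceleration-structure traversal.
#include <cstdio>
#include <optional>

struct Ray     { float origin[3]; float dir[3]; };
struct Hit     { float t; int primitiveId; };
struct Payload { float color[3]; };

// Pretend only rays pointing "up" hit something; a real traversal walks a BVH.
std::optional<Hit> intersectScene(const Ray& r) {
    if (r.dir[1] > 0.0f) return Hit{10.0f, 0};
    return std::nullopt;
}

void missShader(Payload& p)                     { p.color[0] = 0.2f; p.color[1] = 0.3f; p.color[2] = 0.8f; } // "sky"
void closestHitShader(const Hit& h, Payload& p) { p.color[0] = 1.0f; p.color[1] = h.t * 0.01f; p.color[2] = 0.0f; }

void traceRay(const Ray& r, Payload& p) {       // analogous to calling TraceRay
    if (auto hit = intersectScene(r)) closestHitShader(*hit, p);
    else                              missShader(p);
}

int main() {
    // "Ray generation": decide where rays start and where they go, e.g. only
    // toward lights for a shadow-style query, as in the text above.
    for (int i = 0; i < 4; ++i) {
        Ray r{{0, 0, 0}, {0, (i % 2) ? 1.0f : -1.0f, 1.0f}};
        Payload p{};
        traceRay(r, p);
        printf("ray %d -> color (%.1f, %.1f, %.1f)\n", i, p.color[0], p.color[1], p.color[2]);
    }
    return 0;
}
```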

microsoft-2018-gdc-directx12raytracing-multibounce.png

Even more interesting – the closest-hit, any-hit, and miss shaders can call TraceRay themselves, which is used for multi-bounce and other recursive algorithms (see: figure above). The obvious use case might be reflections, which is the headline of the GDC talk, but they want it to be as general as possible, aligning with the evolution of GPUs. Looking at NVIDIA’s VXAO implementation, it also seems like a natural fit for a raytracing algorithm.

Speaking of data structures, Microsoft also detailed what they call the acceleration structure. Each object is composed of two levels. The top level contains per-object metadata, like its transformation and whatever else data that the developer wants to add to it. The bottom level contains the geometry. The briefing states, “essentially vertex and index buffers” so we asked for clarification. DXR requires that triangle geometry be specified as vertex positions in either 32-bit float3 or 16-bit float3 values. There is also a stride property, so developers can tweak data alignment and use their rasterization vertex buffer, as long as it's HLSL float3, either 16-bit or 32-bit.
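
The snippet below sketches that two-level layout in plain C++ so the relationships are easier to see. It is not DXR's actual in-memory format, which is opaque to the application and built by the runtime from geometry and instance descriptors; the point here is just the top-level per-instance metadata, the bottom-level geometry, and the stride that lets a rasterization vertex buffer be reused.

```cpp
// Toy sketch of the two-level acceleration structure described above.
#include <cstddef>
#include <cstdint>
#include <vector>

struct BottomLevelGeometry {                 // one mesh's triangles
    const uint8_t*  vertexBuffer;            // packed vertex data; positions must be float3
    size_t          vertexStrideBytes;       // stride lets positions live inside a larger vertex layout
    size_t          vertexCount;
    const uint32_t* indexBuffer;
    size_t          indexCount;
};

struct TopLevelInstance {                    // per-object entry in the scene
    float transform[3][4];                   // object-to-world transform
    uint32_t userMetadata;                   // whatever else the developer wants to attach
    const BottomLevelGeometry* geometry;     // shared by all instances of the same mesh
};

struct Vertex { float position[3]; float normal[3]; float uv[2]; }; // typical raster layout

int main() {
    std::vector<Vertex>   vertices(3);
    std::vector<uint32_t> indices{0, 1, 2};

    BottomLevelGeometry blas{
        reinterpret_cast<const uint8_t*>(vertices.data()),
        sizeof(Vertex),                      // stride skips over normal/uv to the next position
        vertices.size(), indices.data(), indices.size()};

    TopLevelInstance instance{{{1,0,0,0},{0,1,0,0},{0,0,1,0}}, /*userMetadata=*/42, &blas};
    (void)instance;
    return 0;
}
```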

As for the tools to develop this in…

microsoft-2018-gdc-PIX.png

Microsoft announced PIX back in January 2017. This is a debugging and performance analyzer for 64-bit, DirectX 12 applications. Microsoft will upgrade it to support DXR as soon as the API is released (specifically, “Day 1”). This includes the API calls, the raytracing pipeline resources, the acceleration structure, and so forth. As usual, you can expect Microsoft to support their APIs with quite decent – not perfect, but decent – documentation and tools. They do it well, and they want to make sure it’s available when the API is.

ea-2018-SEED screenshot (002).png

Example of DXR via EA's in-development SEED engine.

In short, raytracing is here, but it’s not taking over rasterization. It doesn’t need to. Microsoft is just giving game developers another, standardized mechanism to gather supplementary data for their games. Several game engines have already announced support for this technology, including the usual suspects in top-tier game technology:

  • Frostbite (EA/DICE)
  • SEED (EA)
  • 3DMark (Futuremark)
  • Unreal Engine 4 (Epic Games)
  • Unity Engine (Unity Technologies)

They also said, “and several others we can’t disclose yet”, so this list is not even complete. But, yeah, if you have Frostbite, Unreal Engine, and Unity, then you have a sizeable market as it is. There is always a question about how much each of these engines will support the technology. Currently, raytracing is not portable outside of DirectX 12, because it’s literally being announced today, and each of these engines intends to support more than just Windows 10 and Xbox.

Still, we finally have a standard for raytracing, which should drive vendors to optimize in a specific direction. From there, it's just a matter of someone taking the risk to actually use the technology for a cool work of art.

If you want to read more, check out Ryan's post about the also-announced RTX, NVIDIA's raytracing technology.

Manufacturer: Microsoft

It's all fun and games until something something AI.

Microsoft announced the Windows Machine Learning (WinML) API about two weeks ago, but they did so in a sort-of abstract context. This week, alongside the 2018 Game Developers Conference, they are grounding it in a practical application: video games!

microsoft-2018-winml-graphic.png

Specifically, the API provides the mechanisms for game developers to run inference on the target machine. The trained models that it runs against would be in the Open Neural Network Exchange (ONNX) format from Microsoft, Facebook, and Amazon. Like the initial announcement suggests, it can be used for any application, not just games, but… you know. If you want to get a technology off the ground, and it requires a high-end GPU, then video game enthusiasts are good lead users. When run in a DirectX application, WinML kernels are queued on the DirectX 12 compute queue.
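
For a sense of what that could look like in code, here is a sketch of loading an ONNX model and evaluating it on the GPU through the Windows.AI.MachineLearning API (C++/WinRT). The API is still in preview, so treat the exact type and method names as assumptions; the model path and the "input"/"output" tensor names are hypothetical placeholders.

```cpp
// Sketch of WinML-style inference: load an ONNX model, bind a tensor, evaluate
// on a DirectX (GPU) device. Names are based on the Windows.AI.MachineLearning
// projection and may differ from the preview SDK discussed in the article.
#include <winrt/Windows.AI.MachineLearning.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <vector>

using namespace winrt;
using namespace winrt::Windows::AI::MachineLearning;

int main() {
    init_apartment();

    LearningModel model = LearningModel::LoadFromFilePath(L"upscaler.onnx"); // hypothetical ONNX file
    LearningModelDevice device(LearningModelDeviceKind::DirectX);            // run on the GPU via D3D12
    LearningModelSession session(model, device);

    std::vector<float> pixels(1 * 3 * 224 * 224, 0.5f);                      // dummy input image
    TensorFloat input = TensorFloat::CreateFromArray({1, 3, 224, 224}, pixels);

    LearningModelBinding binding(session);
    binding.Bind(L"input", input);                                           // hypothetical tensor name

    auto result = session.Evaluate(binding, L"run0");
    auto output = result.Outputs().Lookup(L"output").as<TensorFloat>();      // hypothetical output name
    auto view = output.GetAsVectorView();                                    // read back inferred values
    (void)view;
    return 0;
}
```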

We’ve discussed the concept before. When you’re rendering a video game, simulating an accurate scenario isn’t your goal – the goal is to look like you are. The direct way of looking like you’re doing something is to do it. The problem is that some effects are too slow (or, sometimes, too complicated) to correctly simulate. In these cases, it might be viable to make a deep-learning AI hallucinate a convincing result, even though no actual simulation took place.

Fluid dynamics, global illumination, and up-scaling are three examples.

Previously mentioned SIGGRAPH demo of fluid simulation without fluid simulation...
... just a trained AI hallucinating a scene based on input parameters.

Another place where AI could be useful is… well… AI. One way of making game AI is to give it some set of data from the game environment, often including information that a player in its position would not be able to know, and have it run against a branching logic tree. Deep learning, on the other hand, can train itself on billions of examples of good and bad play and produce results based on input parameters. While the two methods do not sound that different, moving from logic that is designed to logic that is assembled from an abstract good/bad dataset abstracts away the potential for designer assumptions and programmer error. Of course, it shifts that potential for error into the training dataset, but that’s a whole other discussion.
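
A toy contrast between the two approaches, with entirely invented numbers: the first function is hand-designed branching logic, while the second has a generic structure whose behavior lives in weights that a real system would learn from that good/bad dataset.

```cpp
// Hand-designed decision logic vs. a "policy" driven by (here, invented) weights.
#include <cstdio>

struct GameState { float enemyDistance; float myHealth; };
enum class Action { Attack, Retreat, Heal };

// 1) Designed logic: every assumption is written (and possibly wrong) by a programmer.
Action scriptedAI(const GameState& s) {
    if (s.myHealth < 0.3f) return Action::Heal;
    if (s.enemyDistance < 5.0f) return Action::Attack;
    return Action::Retreat;
}

// 2) Learned logic: the structure is generic; behavior lives in the weights.
Action learnedAI(const GameState& s) {
    const float w[3][2] = {{ 0.9f, -0.2f},   // attack score weights (hypothetical)
                           {-0.6f,  0.1f},   // retreat
                           {-0.1f, -1.2f}};  // heal
    float best = -1e9f; int bestIdx = 0;
    for (int a = 0; a < 3; ++a) {
        float score = w[a][0] * (1.0f / (1.0f + s.enemyDistance)) + w[a][1] * s.myHealth;
        if (score > best) { best = score; bestIdx = a; }
    }
    return static_cast<Action>(bestIdx);
}

int main() {
    GameState s{3.0f, 0.8f};
    printf("scripted=%d learned=%d\n", (int)scriptedAI(s), (int)learnedAI(s));
    return 0;
}
```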

The third area that AI could be useful is when you’re creating the game itself.

There’s a lot of grunt and grind work when developing a video game. Licensing prefab solutions (or commissioning someone to do a one-off asset for you) helps ease this burden, but that gets expensive in terms of both time and money. If some of those assets could be created by giving parameters to a deep-learning AI, then those are assets that you would not need to make, allowing you to focus on other assets and how they all fit together.

These are three of the use cases that Microsoft is aiming WinML at.

nvidia-2018-deeplearningcarupscale.png

Sure, these are smooth curves of large details, but the antialiasing pattern looks almost perfect.

For instance, Microsoft is pointing to an NVIDIA demo where they up-sample a photo of a car, once with bilinear filtering and once with a machine learning algorithm (although not WinML-based). The bilinear algorithm behaves exactly as someone who has used Photoshop would expect. The machine learning algorithm, however, was able to identify the objects that the image intended to represent, and it drew the edges that it thought made sense.
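
For reference, here is a minimal bilinear up-scaler of the kind the demo compares against: each output pixel is a weighted average of its four nearest source pixels, which is why edges soften. The learned approach is not reproduced here; this sketch only shows the baseline.

```cpp
// Minimal bilinear up-scaling of a grayscale image stored as a flat float array.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<float> bilinearUpscale(const std::vector<float>& src, int w, int h, int factor) {
    int W = w * factor, H = h * factor;
    std::vector<float> dst(W * H);
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float sx = (x + 0.5f) / factor - 0.5f;      // map output sample back to source space
            float sy = (y + 0.5f) / factor - 0.5f;
            int x0 = (int)std::floor(sx), y0 = (int)std::floor(sy);
            float fx = sx - x0, fy = sy - y0;
            auto at = [&](int px, int py) {             // clamp reads at the image borders
                px = std::min(std::max(px, 0), w - 1);
                py = std::min(std::max(py, 0), h - 1);
                return src[py * w + px];
            };
            dst[y * W + x] = (1 - fx) * (1 - fy) * at(x0, y0)     + fx * (1 - fy) * at(x0 + 1, y0)
                           + (1 - fx) * fy       * at(x0, y0 + 1) + fx * fy       * at(x0 + 1, y0 + 1);
        }
    }
    return dst;
}

int main() {
    std::vector<float> img{0, 1, 1, 0};              // a tiny 2x2 grayscale "image"
    auto up = bilinearUpscale(img, 2, 2, 4);         // 8x8 result
    printf("upscaled to %zu pixels\n", up.size());
    return 0;
}
```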

microsoft-2018-gdc-PIX.png

Like their DirectX Raytracing (DXR) announcement, Microsoft plans to have PIX support WinML “on Day 1”. As for partners? They are currently working with Unity Technologies to provide WinML support in Unity’s ML-Agents plug-in. That’s all the game industry partners they have announced at the moment, though. It’ll be interesting to see who jumps in and who doesn’t over the next couple of years.

NVIDIA RTX Technology Accelerates Ray Tracing for Microsoft DirectX Raytracing API

Subject: Graphics Cards | March 19, 2018 - 01:00 PM |
Tagged: rtx, nvidia, dxr

The big news from the Game Developers Conference this week was Microsoft’s reveal of its work on a new ray tracing API for DirectX called DirectX Raytracing. As the name would imply, this is a new initiative to bring the image quality improvements of ray tracing to consumer hardware with the push of Microsoft’s DX team. Scott already has a great write up on that news and current and future implications of what it will mean for PC gamers, so I highly encourage you all to read that over before diving more into this NVIDIA-specific news.

Ray tracing has been the holy grail of real-time rendering. It is the gap between movies and games – though ray tracing continues to improve in performance, it still takes the power of offline server farms to render the images for your favorite flicks. Modern game engines continue to use rasterization, an efficient method for rendering graphics but one that depends on tricks and illusions to recreate the intended image. Ray tracing inherently solves the problems that rasterization works around, including shadows, transparency, refraction, and reflection, but it does so at a prohibitive performance cost. That will be changing with Microsoft’s enablement of ray tracing through a common API and technology like what NVIDIA has built to accelerate it.

04.jpg

Alongside support and verbal commitment to DXR, NVIDIA is announcing RTX Technology. This is a combination of hardware and software advances to improve the performance of ray tracing algorithms on its hardware, and it works hand in hand with DXR. NVIDIA believes this is the culmination of 10 years of development on ray tracing, much of which we have talked about on this site from the world of professional graphics systems. Think Iray, OptiX, and more.

RTX will run on Volta GPUs only today, which does limit its usefulness to gamers. With the only graphics card on the market even close to being considered a gaming product being the $3,000 TITAN V, RTX is more of a forward-looking technology announcement for the company. We can obviously assume that RTX technology will be integrated on any future consumer gaming graphics cards, be that a revision of Volta or something completely different. (NVIDIA refused to acknowledge plans for any pending Volta consumer GPUs during our meeting.)

The idea I get from NVIDIA is that today’s RTX is meant as a developer enablement platform, getting them used to the idea of adding ray tracing effects into their games and engines and to realize that NVIDIA provides the best hardware to get that done.

I’ll be honest with you – NVIDIA was light on the details of what RTX exactly IS and how it accelerates ray tracing. One very interesting example I was given was first seen with the AI-powered ray tracing optimizations for OptiX from last year’s GDC. There, NVIDIA demonstrated that, using the Volta Tensor cores, it could run an AI-powered de-noiser on the ray traced image, effectively improving the quality of the resulting image and emulating much higher ray counts than are actually processed.

By using the Tensor cores with RTX for the DXR implementation on the TITAN V, NVIDIA will be able to offer image quality and performance for ray tracing well ahead of even the TITAN Xp or GTX 1080 Ti, as those GPUs do not have Tensor cores on board. Does this mean that all (or flagship) consumer graphics cards from NVIDIA will include Tensor cores to enable RTX performance? Obviously, NVIDIA wouldn’t confirm that, but to me it makes sense that we will see that in future generations. The scale of Tensor core integration might change based on price points, but if NVIDIA and Microsoft truly believe in the future of ray tracing to augment and significantly replace rasterization methods, then it will be necessary.

Though that is one example of hardware-specific features being used for RTX on NVIDIA hardware, it’s not the only one present on Volta. But NVIDIA wouldn’t share more.

The relationship between Microsoft DirectX Raytracing and NVIDIA RTX is a bit confusing, but it’s easier to think of RTX as the underlying brand for the ability to ray trace on NVIDIA GPUs. The DXR API is still the interface between the game developer and the hardware, but RTX is what gives NVIDIA the advantage over AMD and its Radeon graphics cards, at least according to NVIDIA.

DXR will still run on other GPUs from NVIDIA that aren’t utilizing the Volta architecture. Microsoft says that any board that can support DX12 Compute will be able to run the new API. But NVIDIA did point out that, in its mind, even with a high-end SKU like the GTX 1080 Ti, ray tracing performance will limit the ability to integrate ray tracing features and enhancements in real-time game engines in the immediate timeframe. That is not to say it is impossible, or that some engine devs won’t spend the time to build something unique, but it is interesting to hear NVIDIA imply that only future products will benefit from ray tracing in games.

It’s also likely that we are months if not a year or more from seeing good integration of DXR in games at retail. And it is also possible that NVIDIA is downplaying the importance of DXR performance today if it happens to be slower than the Vega 64 in the upcoming Futuremark benchmark release.

05.jpg

Alongside the RTX announcement comes GameWorks Ray Tracing, a collection of turnkey modules based on DXR. GameWorks has its own reputation, and we aren't going to get into that here, but NVIDIA wants to think of this addition as a way to "turbo charge enablement" of ray tracing effects in games.

NVIDIA believes that developers are incredibly excited for the implementation of ray tracing into game engines, and that the demos being shown at GDC this week will blow us away. I am looking forward to seeing them and to getting the reactions of major game devs on the release of Microsoft’s new DXR API. The performance impact of ray tracing will still be a hindrance to larger-scale implementations, but with DXR driving the direction with a unified standard, I still expect to see some games with revolutionary image quality by the end of the year.

Source: NVIDIA

HTC announces VIVE Pro Pricing, Available now for Preorder

Subject: General Tech, Graphics Cards | March 19, 2018 - 12:09 PM |
Tagged: vive pro, steamvr, rift, Oculus, Lighthouse, htc

Today, HTC has provided what VR enthusiasts have been eagerly waiting for since the announcement of the upgraded VIVE Pro headset earlier in the year at CES: the pricing and availability of the new device.

vivepro.png

Available for preorder today, the VIVE Pro will cost $799 for the headset-only upgrade. As we mentioned during the VIVE Pro announcement, this first upgrade kit is meant for existing VIVE users who will be reusing their original controllers and lighthouse trackers to get everything up and running.

The HMD-only kit, with its upgraded resolution and optics, is set to start shipping very soon, on April 3, and can be preordered now on the HTC website.

Additionally, your VIVE Pro purchase (through June 3rd, 2018) will come with a free six-month subscription to HTC's VIVEPORT subscription game service, which will give you access to up to 5 titles per month for free (chosen from the VIVEPORT catalog of 400+ games).

There is still no word on the pricing and availability of the full VIVE Pro kit, including the updated Lighthouse 2.0 trackers, but it seems likely that it will come later in the summer, after the upgrade kit has saturated the market of current VIVE owners.

As far as system requirements go, the HTC site doesn't list any difference between the standard VIVE and the VIVE Pro. One change, however, is the lack of an HDMI port on the new VIVE Pro link box, so you'll need a graphics card with an open DisplayPort 1.2 connector. 

Source: HTC