Subject: Graphics Cards | March 21, 2018 - 09:37 PM | Ken Addison
Tagged: GDC, GDC 2018, nvidia, geforce experience, ansel, nvidia highlights, call of duty wwii, fortnite, pubg, tekken 7
Building upon the momentum of being included in the two most popular PC games in the world, PlayerUnknown's Battlegrounds and Fortnite, NVIDIA Highlights (previously known as ShadowPlay Highlights) is expanding to even more titles. Support for Call of Duty: WWII and Tekken 7 is available now, with Dying Light: Bad Blood and Escape from Tarkov coming soon.
For those unfamiliar with NVIDIA Highlights, it’s a feature that when integrated into a game, allows for the triggering of automatic screen recording when specific events happen. For example, think of the kill cam in Call of Duty. When enabled, Highlights will save a recording whenever the kill cam is triggered, allowing you to share exciting gameplay moments without having to think about it.
Animated GIF support has also been added to NVIDIA Highlights, allowing users to share shorter clips to platforms such as Facebook, Google Photos, and Weibo.
In addition to supporting more games and formats, NVIDIA has also released the NVIDIA Highlights SDK, as well as plugins for Unreal Engine and Unity platforms. Previously, NVIDIA was working with developers to integrate Highlights into their games, but now developers will have the ability to add the support themselves.
Hopefully, these changes mean a quicker influx of titles with Highlights support, beyond the 16 currently supported.
In addition to enhancements in Highlights, NVIDIA has also launched a new sharing site for screen captures performed with the Ansel in-game photography tool.
The new ShotWithGeforce.com lets users upload and share their captures from any Ansel-supported game.
Screenshots uploaded to Shot With GeForce are tagged with the specific game the capture is from, making it easy for users to scroll through all of the uploaded captures from a given title.
O Rayly? Ya Rayly. No Ray!
Microsoft has just announced a raytracing extension to DirectX 12, called DirectX Raytracing (DXR), at the 2018 Game Developers Conference in San Francisco.
The goal is not to completely replace rasterization… at least not yet. Instead, raytracing will mostly be used for effects that require supplementary datasets, such as reflections, ambient occlusion, and refraction. Rasterization, the typical way that 3D geometry gets drawn on a 2D display, converts triangle coordinates into screen coordinates, and then a point-in-triangle test runs across every sample. This will likely occur once per AA sample (minus pixels that the triangle can't possibly cover, such as a pixel outside of the triangle's bounding box, but that's just optimization).
For rasterization, each triangle is laid on a 2D grid corresponding to the draw surface.
If any sample is in the triangle, the pixel shader is run.
This example shows the rotated grid MSAA case.
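The per-sample coverage test described above can be sketched in a few lines. This is an illustrative toy, not any real rasterizer's code (the function names and the edge-function formulation are mine): each sample center is tested against the three edge functions of a counter-clockwise triangle.

```python
# Hypothetical sketch of a per-sample point-in-triangle coverage test.
def edge(ax, ay, bx, by, px, py):
    """Signed area of edge (a->b) vs point p; >= 0 means p lies on or to
    the left of the edge for a counter-clockwise triangle."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def sample_covered(tri, px, py):
    (x0, y0), (x1, y1), (x2, y2) = tri
    return (edge(x0, y0, x1, y1, px, py) >= 0 and
            edge(x1, y1, x2, y2, px, py) >= 0 and
            edge(x2, y2, x0, y0, px, py) >= 0)

# A counter-clockwise triangle covering part of a tiny 4x4 render target;
# test one sample per pixel, at each pixel center.
tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
covered = [(x, y) for y in range(4) for x in range(4)
           if sample_covered(tri, x + 0.5, y + 0.5)]
```

Each pixel whose sample lands inside the triangle would then get a pixel shader invocation; MSAA just runs this test at several sample positions per pixel.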
A program, called a pixel shader, is then run with some set of data that the GPU could gather on every valid pixel in the triangle. This set of data typically includes things like world coordinate, screen coordinate, texture coordinates, nearby vertices, and so forth. This lacks a lot of information, especially things that are not visible to the camera. The application is free to provide other sources of data for the shader to crawl… but what?
- Cubemaps are useful for reflections, but they don’t necessarily match the scene.
- Voxels are useful for lighting, as seen with NVIDIA’s VXGI and VXAO.
This is where DirectX Raytracing comes in. There are quite a few components to it, but it’s basically a new pipeline that handles how rays are cast into the environment. After being queued, it starts out with a ray-generation stage, and then, depending on what happens to the ray in the scene, there are closest-hit, any-hit, and miss shaders. Ray generation allows the developer to set up how the rays are cast, where they call an HLSL intrinsic instruction, TraceRay (which is a clever way of invoking them, by the way). This function takes an origin and a direction, so you can choose to, for example, cast rays only in the direction of lights if your algorithm was to approximate partially occluded soft shadows from a non-point light. (There are better algorithms to do that, but it's just the first example that came off the top of my head.) The closest-hit, any-hit, and miss shaders run at the point where the traced ray ends.
To connect this with current technology, imagine that ray-generation is like a vertex shader in rasterization, where it sets up the triangle to be rasterized, leading to pixel shaders being called.
Even more interesting – the closest-hit, any-hit, and miss shaders can call TraceRay themselves, which is used for multi-bounce and other recursive algorithms (see: figure above). The obvious use case might be reflections, which is the headline of the GDC talk, but they want it to be as general as possible, aligning with the evolution of GPUs. Looking at NVIDIA’s VXAO implementation, it also seems like a natural fit for a raytracing algorithm.
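To make the pipeline shape concrete, here is a toy in plain Python (not HLSL; the scene layout, shader names, and dictionary-based "intersection" are all hypothetical stand-ins) showing how ray generation, closest-hit, and miss stages relate, including a closest-hit shader that recursively calls TraceRay for a mirror bounce:

```python
MAX_DEPTH = 2  # cap on recursive bounces, as a real renderer would set

def trace_ray(origin, direction, scene, depth=0):
    hit = scene.get(direction)   # stand-in for acceleration-structure traversal
    if hit is None:
        return miss_shader(direction)
    return closest_hit_shader(hit, scene, depth)

def miss_shader(direction):
    return "sky"                 # e.g. sample an environment map

def closest_hit_shader(hit, scene, depth):
    if hit.get("mirror") and depth < MAX_DEPTH:
        # Shaders may call TraceRay themselves: the multi-bounce case.
        return trace_ray(hit.get("point"), hit["reflect_dir"], scene, depth + 1)
    return hit["color"]

# A toy "scene" keyed by ray direction labels instead of real geometry.
scene = {
    "forward": {"mirror": True, "reflect_dir": "down", "point": None},
    "down": {"mirror": False, "color": "red"},
}
```

Tracing "forward" bounces off the mirror, recurses "down", and returns "red"; any direction with no scene entry falls through to the miss shader.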
Speaking of data structures, Microsoft also detailed what they call the acceleration structure. It is composed of two levels. The top level contains per-object metadata, like its transformation and whatever other data the developer wants to add. The bottom level contains the geometry. The briefing states, “essentially vertex and index buffers” so we asked for clarification. DXR requires that triangle geometry be specified as vertex positions in either 32-bit float3 or 16-bit float3 values. There is also a stride property, so developers can tweak data alignment and reuse their rasterization vertex buffer, as long as it's HLSL float3, either 16-bit or 32-bit.
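A rough sketch of that two-level layout, with hypothetical Python names (the real API works with GPU buffers and build calls, not Python objects):

```python
# Illustrative model of the two-level acceleration structure described above.
from dataclasses import dataclass, field

@dataclass
class BottomLevel:          # "essentially vertex and index buffers"
    vertices: list          # float3 positions, per the DXR requirement
    indices: list
    stride: int = 12        # bytes between positions (3 x 32-bit float)

@dataclass
class TopLevelInstance:     # per-object metadata lives at the top level
    geometry: BottomLevel
    transform: list         # e.g. a 3x4 world matrix
    user_data: dict = field(default_factory=dict)

tri = BottomLevel(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)], indices=[0, 1, 2])
inst = TopLevelInstance(geometry=tri,
                        transform=[[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
```

The stride property is what lets a 16-byte-aligned rasterization vertex buffer double as raytracing geometry: the positions just need to be float3 at a fixed spacing.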
As for the tools to develop this in…
Microsoft announced PIX back in January 2017. This is a debugging and performance analyzer for 64-bit, DirectX 12 applications. Microsoft will upgrade it to support DXR as soon as the API is released (specifically, “Day 1”). This includes the API calls, the raytracing pipeline resources, the acceleration structure, and so forth. As usual, you can expect Microsoft to support their APIs with quite decent – not perfect, but decent – documentation and tools. They do it well, and they want to make sure it’s available when the API is.
Example of DXR via EA's in-development SEED engine.
In short, raytracing is here, but it’s not taking over rasterization. It doesn’t need to. Microsoft is just giving game developers another, standardized mechanism to gather supplementary data for their games. Several game engines have already announced support for this technology, including the usual suspects of top-tier game technology:
- Frostbite (EA/DICE)
- SEED (EA)
- 3DMark (Futuremark)
- Unreal Engine 4 (Epic Games)
- Unity Engine (Unity Technologies)
They also said, “and several others we can’t disclose yet”, so this list is not even complete. But, yeah, if you have Frostbite, Unreal Engine, and Unity, then you have a sizeable market as it is. There is always a question about how much each of these engines will support the technology. Currently, raytracing is not portable outside of DirectX 12, because it’s literally being announced today, and each of these engines intends to support more than just Windows 10 and Xbox.
Still, we finally have a standard for raytracing, which should drive vendors to optimize in a specific direction. From there, it's just a matter of someone taking the risk to actually use the technology for a cool work of art.
If you want to read more, check out Ryan's post about the also-announced RTX, NVIDIA's raytracing technology.
Subject: Graphics Cards | March 19, 2018 - 01:00 PM | Ryan Shrout
Tagged: rtx, nvidia, dxr
The big news from the Game Developers Conference this week was Microsoft’s reveal of its work on a new ray tracing API for DirectX called DirectX Raytracing. As the name would imply, this is a new initiative to bring the image quality improvements of ray tracing to consumer hardware with the push of Microsoft’s DX team. Scott already has a great write up on that news and current and future implications of what it will mean for PC gamers, so I highly encourage you all to read that over before diving more into this NVIDIA-specific news.
- For those of you who might need a history lesson on ray tracing and its growth, check out this three-part series that ran on PC Perspective as far back as 2006!
- Ray Tracing and Gaming - Quake 4: Ray Traced Project
- Rendering Games with Raytracing Will Revolutionize Graphics
- Ray Tracing and Gaming - One Year Later
Ray tracing has long been the holy grail of real-time rendering. It is the gap between movies and games: though ray tracing continues to improve in performance, it still takes the power of offline server farms to render the images for your favorite flicks. Modern game engines continue to use rasterization, an efficient method for rendering graphics but one that depends on tricks and illusions to recreate the intended image. Ray tracing inherently solves the problems that rasterization works around, including shadows, transparency, refraction, and reflection, but it does so at a prohibitive performance cost. That will be changing with Microsoft’s enablement of ray tracing through a common API and technology like what NVIDIA has built to accelerate it.
Alongside support and verbal commitment to DXR, NVIDIA is announcing RTX Technology. This is a combination of hardware and software advances to improve the performance of ray tracing algorithms on its hardware, and it works hand in hand with DXR. NVIDIA believes this is the culmination of 10 years of development on ray tracing, much of which we have talked about on this site from the world of professional graphics systems. Think Iray, OptiX, and more.
RTX will run on Volta GPUs only today, which does limit its usefulness to gamers. With the only Volta graphics card on the market even close to being considered a gaming product being the $3,000 TITAN V, RTX is more of a forward-looking technology announcement for the company. We can obviously assume that RTX technology will be integrated on any future consumer gaming graphics cards, be that a revision of Volta or something completely different. (NVIDIA refused to acknowledge plans for any pending Volta consumer GPUs during our meeting.)
The idea I get from NVIDIA is that today’s RTX is meant as a developer enablement platform, getting them used to the idea of adding ray tracing effects into their games and engines and to realize that NVIDIA provides the best hardware to get that done.
I’ll be honest with you – NVIDIA was light on the details of what RTX exactly IS and how it accelerates ray tracing. One very interesting example I was given was seen first with the AI-powered ray tracing optimizations for OptiX from last year’s GDC. There, NVIDIA demonstrated that using the Volta Tensor cores it could run an AI-powered de-noiser on the ray traced image, effectively improving the quality of the resulting image and emulating much higher ray counts than are actually processed.
By using the Tensor cores with RTX for DXR implementation on the TITAN V, NVIDIA will be able to offer image quality and performance for ray tracing well ahead of even the TITAN Xp or GTX 1080 Ti, as those GPUs do not have Tensor cores on-board. Does this mean that all (or flagship) consumer graphics cards from NVIDIA will include Tensor cores to enable RTX performance? Obviously, NVIDIA wouldn’t confirm that, but to me it makes sense that we will see that in future generations. The scale of Tensor core integration might change based on price points, but if NVIDIA and Microsoft truly believe in the future of ray tracing to augment and significantly replace rasterization methods, then it will be necessary.
Though that is one example of hardware-specific features being used for RTX on NVIDIA hardware, it’s not the only one on Volta. But NVIDIA wouldn’t share more.
The relationship between Microsoft DirectX Raytracing and NVIDIA RTX is a bit confusing, but it’s easier to think of RTX as the underlying brand for the ability to ray trace on NVIDIA GPUs. The DXR API is still the interface between the game developer and the hardware, but RTX is what gives NVIDIA the advantage over AMD and its Radeon graphics cards, at least according to NVIDIA.
DXR will still run on other GPUs from NVIDIA that aren’t utilizing the Volta architecture. Microsoft says that any board that can support DX12 Compute will be able to run the new API. But NVIDIA did point out that, in its mind, even with a high-end SKU like the GTX 1080 Ti, raw ray tracing performance will limit the ability to integrate ray tracing features and enhancements in real-time game engines in the immediate timeframe. That’s not to say it is impossible, and some engine devs might spend the time to build something unique, but it is interesting to hear NVIDIA imply that only future products will benefit from ray tracing in games.
It’s also likely that we are months if not a year or more from seeing good integration of DXR in games at retail. And it is also possible that NVIDIA is downplaying the importance of DXR performance today if it happens to be slower than the Vega 64 in the upcoming Futuremark benchmark release.
Alongside the RTX announcement comes GameWorks Ray Tracing, a collection of turnkey modules based on DXR. GameWorks has its own reputation, and we aren't going to get into that here, but NVIDIA wants to think of this addition as a way to "turbo charge enablement" of ray tracing effects in games.
NVIDIA believes that developers are incredibly excited for the implementation of ray tracing into game engines, and that the demos being shown at GDC this week will blow us away. I am looking forward to seeing them and for getting the reactions of major game devs on the release of Microsoft’s new DXR API. The performance impact of ray tracing will still be a hindrance to larger scale implementations, but with DXR driving the direction with a unified standard, I still expect to see some games with revolutionary image quality by the end of the year.
Subject: General Tech | March 8, 2018 - 03:26 PM | Jeremy Hellstrom
Tagged: dirty pool, nvidia, gpp, GeForce Partner Program
[H]ard|OCP have posted an article looking at the brand new GeForce Partner Program which NVIDIA has announced, one bearing a striking resemblance to a certain Intel initiative ... which turned out poorly. After investigating the details for several weeks, including attempts to talk with OEMs and AIBs, [H] has raised some serious concerns, including what seems to be a membership requirement to sell only NVIDIA GPUs in a product line which is aligned with GPP. As membership in the GPP offers "high-effort engineering engagements -- early tech engagement -- launch partner status -- game bundling -- sales rebate programs -- social media and PR support -- marketing reports -- Marketing Development Funds (MDF)", this would cut a company which chose to sell competitors' products out of quite a few things.
At this time NVIDIA has not responded to inquiries and the OEMs and AIBs which [H] spoke to declined to make any official comments; off the record there were serious concerns about the legality of this project. Expect to hear more about this from various sites as they seek the transparency which NVIDIA Director John Teeple mentioned in his post.
"While we usually like to focus on all the wonderful and immersive worlds that video cards and their GPUs can open up to us, today we are tackling something a bit different. The GeForce Partner Program, known as GPP in the industry, is a "marketing" program that looks to HardOCP as being an anticompetitive tactic against AMD and Intel."
Here is some more Tech News from around the web:
- Women of Infosec call bullsh*t on RSA's claim it could only find one female speaker @ The Register
- Oculus Rift headsets go to Borksville as security certificate expires @ The Inquirer
- Windows 10 S Mode can be switched off for free, Microsoft confirms @ The Inquirer
Subject: General Tech | March 4, 2018 - 04:55 PM | Scott Michaud
Tagged: Blender, Volta, nvidia
Normally the “a” patch of Blender arrives much closer to the numbered release – about a month or so.
Five months after 2.79, however, the Blender Foundation has released 2.79a. It seemed likely that it would happen at some point, because it looks like they are aiming for 2.80 to be the next full release, and that will take some time. I haven’t had a chance to use 2.79a yet, but the release notes are mostly bug fixes and performance improvements.
Glancing through the release notes, one noteworthy addition is that Blender 2.79a now includes the CUDA 9 SDK in its build process, along with work-arounds for “performance loss” on those devices. While I haven’t heard any complaints from Titan V owners, the lack of a CUDA 8 SDK was a big problem for early owners of GeForce GTX 10x0 cards, so Volta users might have been suffering in silence until now. If you were having issues with the Titan V, then you should try 2.79a.
If you’re interested, be sure to check out the latest release. As always, it’s free.
Subject: Graphics Cards | March 4, 2018 - 02:02 PM | Scott Michaud
Tagged: nvidia, hotfix, graphics drivers
NVIDIA has published a hotfix driver, 391.05, for a few issues that didn’t make it into the recently released 391.01 WHQL version. Specifically, if you are experiencing any of the following issues, then you can go to the NVIDIA forums and follow the link to their associated CustHelp page:
- NVIDIA Freestyle stopped working
- Display corruption on Titan V
- Support for Microsoft Surface Book notebooks
While improved support for the Titan V and the Microsoft Surface Book is very important for anyone who owns those devices, NVIDIA Freestyle is the interesting one for the masses. The feature allows users to hook the post-processing stage of various supported games and inject their own effects. It launched in January and is still in beta, but early adopters still want it to work, of course. If you were playing around with this feature and it stopped working on 390-based drivers, then check out this hotfix.
For the rest of us? Probably a good idea to stay on the official drivers. Hotfixes have reduced QA, so it’s possible that other bugs were introduced in the process.
Subject: Graphics Cards | February 28, 2018 - 09:04 PM | Ryan Shrout
Tagged: bitmain, bitcoin, qualcomm, nvidia, amd
This article originally appeared in MarketWatch.
Research firm Bernstein recently published a report on the profitability of Bitmain Technologies, a secretive Chinese company with a huge impact on the bitcoin and cryptocurrency markets.
With estimated 2017 profits ranging from $3 billion to $4 billion, the size and scope of Beijing-based Bitmain is undeniable, with annual net income higher than some major tech players, including Nvidia and AMD. The privately held company, founded five years ago, has expanded its reach into many bitcoin-based markets, but most of its income stems from the development and sale of dedicated cryptocurrency mining hardware.
There is a concern that the sudden introduction of additional companies in the chip-production landscape could alter how other players operate. This includes the ability for Nvidia, AMD, Qualcomm and others to order chip production from popular semiconductor vendors at the necessary prices to remain competitive in their respective markets.
Bitmain makes most of its income through the development of dedicated chips used to mine bitcoin. These ASICs (application-specific integrated circuits) offer better performance and power efficiency than other products such as graphics chips from Nvidia and AMD. The Bitmain chips are then combined into systems called “miners” that can include as many as 250 chips in a single unit. Those are sold to large mining companies or individuals hoping to turn a profit from the speculative cryptocurrency markets for prices ranging from a few hundred to a few thousand dollars apiece.
Bitcoin mining giant
Bernstein estimates that as much as 70%-80% of the dedicated market for bitcoin mining is being addressed by Bitmain and its ASIC sales.
Bitmain has secondary income sources, including running mining pools (where groups of bitcoin miners share the workload of computing in order to turn a profit sooner) and cloud-based mining services where customers can simply rent mining hardware that exists in a dedicated server location. This enables people to attempt to profit from mining without the expense of buying hardware directly.
A Bitmain Antminer
The chip developer and mining hardware giant has key advantages for revenue growth and stability, despite the volatility of the cryptocurrency market. When Bitmain designs a new ASIC that can address a new currency or algorithm, or run a current coin algorithm faster than was previously possible, it can choose to build its Antminers (the brand for these units) and operate them at its own server farms, squeezing the profitability and advantage the faster chips offer on the bitcoin market before anyone else in the ecosystem has access to them.
As the difficulty of mining increases (which occurs as higher-performance mining options are released, lowering the profitability of older hardware), Bitmain can then start selling the new chips and associated Antminers to customers, moving revenue from mining directly to sales of mining hardware.
This pattern can be repeated for as long as chip development continues, giving Bitmain a tremendous amount of flexibility to balance revenue from different streams.
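The trade-off behind that pattern can be put in toy numbers (all figures below are hypothetical, chosen only to show the shape of the curve): a miner's share of the roughly fixed daily block reward shrinks as total network hashrate grows, so running a new chip in-house is most profitable early, and selling the hardware becomes relatively more attractive later.

```python
# Toy mining-profitability arithmetic; every input value is hypothetical.
def daily_profit(miner_hashrate, network_hashrate, daily_reward_usd,
                 power_kw, electricity_usd_per_kwh):
    # Expected revenue is proportional to your slice of total hashrate.
    revenue = daily_reward_usd * miner_hashrate / network_hashrate
    power_cost = power_kw * 24 * electricity_usd_per_kwh
    return revenue - power_cost

# Same 14 TH/s, 1.4 kW unit, before and after the network hashrate doubles.
early = daily_profit(14e12, 25e18, 18_000_000, 1.4, 0.08)
late = daily_profit(14e12, 50e18, 18_000_000, 1.4, 0.08)
```

With these made-up inputs the unit earns several dollars a day at first and far less after the network doubles, which is exactly the window in which Bitmain can shift from mining with new chips to selling them.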
Imagine a situation where one of the major graphics chip vendors exclusively used its latest graphics chips for its own services like cloud-compute, crypto-mining and server-based rendering and how much more valuable those resources would be — that is the power that Bitmain holds over the bitcoin market.
Competing for foundry business
Clearly Bitmain is big business, and its impact goes well beyond just the bitcoin space. Because its dominance for miners depends on new hardware designs and chip production, where performance and power efficiency are critical to building profitable hardware, it competes for the same foundry business as other fabless semiconductor giants. That includes Apple, Nvidia, Qualcomm, AMD and others.
Companies that build ASICs as part of their business model, including Samsung, TSMC, GlobalFoundries and even Intel to a small degree, look for customers willing to bid the most for the limited availability of production inventory. Bitmain is not restricted to a customer base that is cost-sensitive — instead, its customers are profit-sensitive. As long as the crypto market remains profitable, Bitmain can absorb the added cost of chip production.
Advantages over Nvidia, AMD and Qualcomm
Nvidia, AMD and Qualcomm are not as flexible. Despite the fact that Nvidia can charge thousands for some of its most powerful graphics chips when targeting the enterprise and machine-learning market, the wider gaming market is more sensitive to price changes. You can see that in the unrest that has existed in the gaming space as the price of graphics cards rises due to inventory going to miners rather than gamers. Neither AMD nor Nvidia will get away with selling graphics cards to partners for higher prices and, as a result, there is a potential for negative market growth in PC gaming.
If Bitmain uses the same foundry as others, and is willing to pay more for it to build their chips at a higher priority than other fabless semiconductor companies, then it could directly affect the availability and pricing for graphics chips, mobile phone processors and anything else built at those facilities. As a result, not only does the cryptocurrency market have an effect on the current graphics chip market for gamers by causing shortages, but it could also impact future chip availability if Bitmain (and its competitors) are willing to spend more for the advanced process technologies coming in 2018 and beyond.
Still, nothing is certain in the world of bitcoin and cryptocurrency. The fickle and volatile market means the profitability of Bitmain’s Antminers could be reduced, lessening the drive to pay more for chips and production. There is clearly an impact from sudden bitcoin value drops (from $20,000 to $6,000, as we saw this month) on mining hardware sales, both graphics chip-based and ASIC-based, but measuring that and predicting it is a difficult venture.
Subject: General Tech | February 19, 2018 - 12:59 PM | Jeremy Hellstrom
Tagged: Nintendo Switch, nvidia, Tegra X1
Sometimes a flaw in a chip's design can be used for good; for instance, a flaw in Nvidia's Tegra X1 allows a successful install of Linux on the Nintendo Switch. The flaw is in the firmware, so Nintendo cannot push out a fix that would disable this feature on current Switches. For now, those who have managed this trick are not sharing, so you will have to wait to try to fry your own Switch. As The Inquirer points out, this is not a terrible issue, as the Linux-based Switch still needs work before you can play anything on it, be it Switch games, legacy Nintendo, or Steam.
"NOT CONTENT with simply getting Linux to boot on the Nintendo Switch, the hacker folks over at fail0verflow have managed to get the hybrid console to behave like a full-fat Linux PC."
Here is some more Tech News from around the web:
- Chrome Extension Brings 'View Image' Button Back @ Slashdot
- Guidemaster: Smartwatches worthy of replacing your favorite timepiece @ Ars Technica
- Microsoft fixes limitations of Windows 10 on ARM by deleting any mention of them @ The Inquirer
- If you don't like what IBM is pitching, blame Watson: It's generating sales 'solutions' now @ The Register
- Oh sh-itcoin! Crypto-dosh swap-shop Coinbase empties punters' bank accounts @ The Register
- Google Exposes How Malicious Sites Can Exploit Microsoft Edge @ Slashdot
- When it absolutely, positively needs to be leaked overnight: 120k FedEx customer files spill from AWS S3 silo @ The Register
- Zhiyun Crane 2 Gimbal @ TechPowerUp
Subject: Graphics Cards | February 18, 2018 - 02:54 PM | Scott Michaud
Tagged: opengl, nvidia, metal, macos, apple
Just two days ago, NVIDIA published a job posting for a software engineer to “implement and extend 3D graphics and Metal”. Given that they specify the Metal API, and that they want applicants who are “Experienced with OSX and/or Linux operating systems”, it seems clear that this job would involve macOS and/or iOS.
First, if this appeals to any of our readers, the job posting is here.
Second, and this is where it gets potentially news-worthy, is that NVIDIA hasn’t really done a whole lot on Apple platforms for a while. The most recent NVIDIA GPU to see macOS is the GeForce GTX 680. It’s entirely possible that NVIDIA needs someone to fill in and maintain those old components. If that’s the case? Business as usual. Nothing to see here.
The other possibility is that NVIDIA might be expecting a design win with Apple. What? Who knows. It could be something as simple as Apple’s external GPU architecture allowing the user to select their own add-in board. Alternatively, Apple could have selected an NVIDIA GPU for one or more product lines, which they have not done since 2013 (as far as I can tell).
Apple typically makes big announcements at WWDC, which is expected in early June, or around the back-to-school season in September. I’m guessing we’ll know by then at the latest if something is in the works.
Subject: General Tech | February 15, 2018 - 11:32 AM | Ken Addison
Tagged: podcast, Intel, amd, nvidia, raven ridge, r5 2400g, r3 2200g, arm, project trillium, qualcomm, snapdragon 845, x24, LTE, 5G
PC Perspective Podcast #487 - 02/15/18
Join us this week for a recap of news and reviews including new AMD Desktop APUs, Snapdragon 845 Performance Preview, ARM Machine Learning, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the iTunes Store (audio only)
- Google Play - Subscribe to our audio podcast directly through Google Play!
- RSS - Subscribe through your regular RSS reader (audio only)
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano
Peanut Gallery: Alex Lustenberg, Ken Addison
Program length: 1:18:46
Podcast topics of discussion:
Week in Review:
News items of interest:
Picks of the Week: