Eight-GPU SLI in Unreal Engine 4 (Yes There Is a Catch)

Subject: Graphics Cards | March 29, 2018 - 09:52 PM |
Tagged: nvidia, GTC, gp102, quadro p6000

At GTC 2018, Walt Disney Imagineering unveiled a work-in-progress clip of its upcoming Star Wars: Galaxy’s Edge attraction, which is expected to launch next year at Disneyland and Walt Disney World Resort. The cool part about this ride is that it will be using Unreal Engine 4 with eight GP102-based Quadro P6000 graphics cards. NVIDIA also reports that Disney has donated the code back to Epic Games to help them with their multi-GPU scaling in general – a win for us consumers… in a more limited fashion.

nvidia-2018-GTC-starwars-8-way-sli.jpg

See? SLI doesn’t need to be limited to two cards if you have a market cap of $100 billion USD.

Another interesting angle to this story is how typical PC components are contributing to these large experiences. Sure, Quadro hardware isn’t exactly cheap, but it can be purchased through typical retail channels and it allows the company to focus their engineering time elsewhere.

Ironically, this also comes about two decades after location-based entertainment started to decline… but, you know, it’s Disneyland and Disney World. They’re fine.

Source: NVIDIA

ASRock Enters Graphics Card Market With Phantom Gaming Series of AMD GPUs

Subject: Graphics Cards | March 29, 2018 - 05:45 PM |
Tagged: RX 580, RX 570, RX 560, RX 550, Polaris, mining, asrock, amd

ASRock, a company known mostly for its motherboards (formerly an Asus sub-brand and, since 2010, an independent company owned by Pegatron), is getting into the graphics card market with a new Phantom Gaming series. At launch, the Phantom Gaming series comprises four AMD Polaris-based graphics cards: the Phantom Gaming RX 550 2G and RX 560 2G on the low end, and the Phantom Gaming X RX 570 8G OC and RX 580 8G OC in the mid/high-end range.

Phantom Gaming X Radeon RX580 8G OC(L4).png

ASRock is using black shrouds with white accents and silver and red logos. The lower end Phantom Gaming cards utilize a single dual ball bearing fan while the Phantom Gaming X cards use a dual fan configuration. ASRock is using copper baseplates paired with aluminum heatsinks and composite heatpipes. The Phantom Gaming RX 550 and RX 560 cards use only PCI-E slot power while the Phantom Gaming X RX 570 and RX 580 cards get power from both the slot and a single 8-pin PCI-E power connector.

Video outputs include one HDMI 2.0, one DisplayPort 1.4, and one DL-DVI-D on the Phantom Gaming parts and one HDMI 2.0, three DisplayPort 1.4, and one DL-DVI-D on the higher-end Phantom Gaming X graphics cards. All of the graphics card models feature both silent and overclocked modes in addition to their out-of-the-box default clocks depending on whether you value performance or noise. Users can select which mode they want or perform a custom overclock or fan curve using ASRock's Phantom Gaming Tweak utility.

On the performance front, out of the box ASRock is slightly overclocking the Phantom Gaming X OC cards (the RX 570 and RX 580 based ones) and slightly underclocking the lower end Phantom Gaming cards (including the memory which is downclocked to 6 GHz) compared to their AMD reference specifications.

                             ASRock      AMD      ASRock      AMD      ASRock    AMD       ASRock    AMD
                             RX 580 OC   RX 580   RX 570 OC   RX 570   RX 560    RX 560    RX 550    RX 550
Cores                        2304        2304     2048        2048     896       896       512       512
GPU Clock (MHz)              1380        1340     1280        1244     1149      1275      1100      1183
GPU Clock, OC Mode (MHz)     1435        -        1331        -        1194      -         1144      -
Memory (GDDR5)               8GB         8GB      8GB         8GB      2GB       2GB/4GB   2GB       2GB/4GB
Memory Clock (GHz)           8           8        7           7        6         7         6         7
Memory Clock, OC Mode (MHz)  8320        -        7280        -        6240      -         6240      -
Texture Units                144         144      128         128      64        64        32        32
ROPs                         32          32       32          32       16        16        16        16

The table above shows the comparisons between the ASRock graphics cards and their AMD reference card counterparts. Note that the Phantom Gaming RX 560 2G is based on the cut-down 14 CU (compute unit) model rather than the launch 16 CU GPU. Also, even in OC Mode, ASRock does not bring the memory up to the 7 GT/s reference spec. On the positive side, turning on OC mode does give a decent factory overclock of the GPU over reference. Also nice to see is that on the higher end "OC Certified" Phantom Gaming X cards, ASRock overclocks both the GPU and memory speeds which is often not the case with factory overclocks.

Phantom Gaming Radeon RX550 2G(L1).png

ASRock did not detail pricing for any of the launch cards, but they should be available soon, with 4GB models of the RX 560 and RX 550 to follow later this year.

It is always nice to have more competition in this space, and hopefully a new AIB partner for AMD helps alleviate shortages and demand for gaming cards, if only by a bit. I am curious how well the cards will perform: while they look good on paper, the company is new to graphics cards, and the build quality really needs to be there. I am just hoping that the Phantom Gaming moniker is not an allusion to how hard these cards are going to be to find for gaming! (heh) If the rumored Ethereum ASICs do not kill the demand for AMD GPUs, I expect that ASRock will also release mining-specific cards at some point.

What are your thoughts on the news of ASRock moving into graphics cards?

Also read:

Source: Tech Report
Manufacturer: Intel

System Overview

Announced at Intel's Developer Forum in 2012, and launched later that year, the Next Unit of Computing (NUC) project was initially a bit confusing to the enthusiast PC press. In a market that appeared to be discarding traditional desktops in favor of notebooks, it seemed a bit odd to launch a product that still depended on a monitor, mouse, and keyboard, yet didn't provide any more computing power.

Despite this criticism, the NUC lineup has rapidly expanded over the years, seeing success in areas such as digital signage and enterprise environments. However, the enthusiast PC market has mostly eluded the lure of the NUC.

Intel's Skylake-based Skull Canyon NUC was the company's first attempt to cater to the enthusiast market, with a slight departure from the traditional 4-in x 4-in form factor and the adoption of the company's best-ever integrated graphics solution, Iris Pro. Additionally, the ability to connect external GPUs via Thunderbolt 3 meant Skull Canyon offered more of a focus on high-end PC graphics.

However, Skull Canyon mostly fell on deaf ears among hardcore PC users, and it seemed that Intel lacked the proper solution to make a "gaming-focused" NUC device—until now.

8th Gen Intel Core processor.jpg

Announced at CES 2018, the lengthily named 8th Gen Intel® Core™ processors With Radeon™ RX Vega M Graphics (henceforth referred to as the code name, Kaby Lake-G) marks a new direction for Intel. By partnering with one of the leaders in high-end PC graphics, AMD, Intel can now pair their processors with graphics capable of playing modern games at high resolutions and frame rates.

DSC04773.JPG

The first product to launch using the new Kaby Lake-G family of processors is Intel's own NUC, the NUC8i7HVK (Hades Canyon). Will the marriage of Intel and AMD finally provide a NUC capable of at least moderate gaming? Let's dig a bit deeper and find out.

Click here to continue reading our review of the Intel Hades Canyon NUC!

GDC 2018: NVIDIA Adds new Ansel and Highlights features to GeForce Experience

Subject: Graphics Cards | March 21, 2018 - 09:37 PM |
Tagged: GDC, GDC 2018, nvidia, geforce experience, ansel, nvidia highlights, call of duty wwii, fortnite, pubg, tekken 7

Building upon the momentum of being included in the two most popular PC games in the world, PlayerUnknown's Battlegrounds and Fortnite, NVIDIA Highlights (previously known as ShadowPlay Highlights) is expanding to even more titles. Support for Call of Duty: WWII and Tekken 7 are available now, with Dying Light: Bad Blood and Escape from Tarkov coming soon.

For those unfamiliar with NVIDIA Highlights, it’s a feature that when integrated into a game, allows for the triggering of automatic screen recording when specific events happen. For example, think of the kill cam in Call of Duty. When enabled, Highlights will save a recording whenever the kill cam is triggered, allowing you to share exciting gameplay moments without having to think about it.
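The mechanism described above can be sketched in a few lines: keep a rolling buffer of recent frames, and save it whenever the game fires an event. This is a toy illustration in Python with invented names, not the actual Highlights SDK, which exposes this through its own API.

```python
from collections import deque

class HighlightRecorder:
    def __init__(self, buffer_frames=120):
        # Always-on rolling buffer of the last few seconds of gameplay.
        self.buffer = deque(maxlen=buffer_frames)
        self.saved_clips = []

    def capture_frame(self, frame):
        self.buffer.append(frame)

    def on_event(self, name):
        # The game calls this when something notable happens (a kill cam,
        # for example); the clip is saved without the player having to
        # think about it.
        self.saved_clips.append((name, list(self.buffer)))

rec = HighlightRecorder(buffer_frames=3)
for f in range(6):
    rec.capture_frame(f)
rec.on_event("kill_cam")
print(rec.saved_clips)  # [('kill_cam', [3, 4, 5])]
```

The key design point is that recording is always on; the event only decides whether the already-captured frames get kept.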

Animated GIF support has also been added to NVIDIA Highlights, allowing users to share shorter clips to platforms including Facebook, Google Photos, or Weibo.

In addition to supporting more games and formats, NVIDIA has also released the NVIDIA Highlights SDK, as well as plugins for Unreal Engine and Unity platforms. Previously, NVIDIA was working with developers to integrate Highlights into their games, but now developers will have the ability to add the support themselves.

Hopefully, these changes mean a quicker influx of titles with Highlights support, beyond the 16 currently supported.

In addition to enhancements in Highlights, NVIDIA has also launched a new sharing site for screen captures performed with the Ansel in-game photography tool.

The new ShotWithGeforce.com lets users upload and share their captures from any Ansel supported game.

shotwithgeforce.PNG

Screenshots uploaded to Shot With GeForce are tagged with the specific game the capture is from, making it easy for users to scroll through all of the uploaded captures from a given title.

Source: NVIDIA
Manufacturer: Microsoft

O Rayly? Ya Rayly. No Ray!

Microsoft has just announced a raytracing extension to DirectX 12, called DirectX Raytracing (DXR), at the 2018 Game Developers Conference in San Francisco.

microsoft-2015-directx12-logo.jpg

The goal is not to completely replace rasterization… at least not yet. The feature will mostly be used for effects that require supplementary datasets, such as reflections, ambient occlusion, and refraction. Rasterization, the typical way that 3D geometry gets drawn on a 2D display, converts triangle coordinates into screen coordinates, and then a point-in-triangle test runs across every sample. This will likely occur once per AA sample (minus pixels that the triangle can’t possibly cover -- such as a pixel outside of the triangle's bounding box -- but that's just optimization).
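The point-in-triangle test mentioned above is commonly done with 2D edge functions. A minimal Python sketch, with illustrative names rather than anything resembling real rasterizer code:

```python
def edge(a, b, p):
    # Signed area of the parallelogram spanned by (a->b, a->p);
    # positive if p lies to the left of the directed edge a->b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covered(tri, sample):
    # A sample is inside the triangle if it sits on the same side of
    # all three edges (counter-clockwise winding assumed here).
    a, b, c = tri
    return (edge(a, b, sample) >= 0 and
            edge(b, c, sample) >= 0 and
            edge(c, a, sample) >= 0)

tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
print(covered(tri, (1.0, 1.0)))  # True  (inside)
print(covered(tri, (3.5, 3.5)))  # False (outside)
```

A real GPU evaluates these edge functions in parallel for every sample in the triangle's bounding box, which is exactly the per-AA-sample cost described above.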

microsoft-2018-gdc-directx12raytracing-rasterization.png

For rasterization, each triangle is laid on a 2D grid corresponding to the draw surface.
If any sample is in the triangle, the pixel shader is run.
This example shows the rotated grid MSAA case.

A program, called a pixel shader, is then run with some set of data that the GPU could gather on every valid pixel in the triangle. This set of data typically includes things like world coordinate, screen coordinate, texture coordinates, nearby vertices, and so forth. This lacks a lot of information, especially things that are not visible to the camera. The application is free to provide other sources of data for the shader to crawl… but what?

  • Cubemaps are useful for reflections, but they don’t necessarily match the scene.
  • Voxels are useful for lighting, as seen with NVIDIA’s VXGI and VXAO.

This is where DirectX Raytracing comes in. There are quite a few components to it, but it’s basically a new pipeline that handles how rays are cast into the environment. After being queued, it starts out with a ray-generation stage, and then, depending on what happens to the ray in the scene, there are closest-hit, any-hit, and miss shaders. Ray generation allows the developer to set up how the rays are cast, by calling an HLSL intrinsic instruction, TraceRay (which is a clever way of invoking them, by the way). This function takes an origin and a direction, so you could choose, for example, to cast rays only in the direction of lights if your algorithm were approximating partially occluded soft shadows from a non-point light. (There are better algorithms for that; it's just the first example that came to mind.) The closest-hit, any-hit, and miss shaders execute at the point where the traced ray ends.

To connect this with current technology, imagine that ray-generation is like a vertex shader in rasterization, where it sets up the triangle to be rasterized, leading to pixel shaders being called.
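The control flow of these stages can be sketched in plain Python. The scene representation and shader names below are invented purely to show how the pieces call each other; the real shaders are HLSL running on the GPU:

```python
def ray_generation(scene, rays):
    # The ray-generation stage decides which rays to cast -- the
    # article's analogy: like a vertex shader setting up later work.
    return [trace_ray(scene, r) for r in rays]

def trace_ray(scene, ray):
    # Stand-in for the real ray/scene intersection query.
    hit = scene.get(ray)
    if hit is None:
        return miss_shader(ray)
    return closest_hit_shader(scene, hit)

def closest_hit_shader(scene, hit):
    # Hit shaders may themselves call TraceRay -- the recursive,
    # multi-bounce case discussed below.
    if "reflect" in hit:
        return "reflected:" + trace_ray(scene, hit["reflect"])
    return hit["color"]

def miss_shader(ray):
    return "sky"

scene = {"r1": {"color": "red"},
         "r2": {"reflect": "r1"}}
print(ray_generation(scene, ["r1", "r2", "r3"]))
# ['red', 'reflected:red', 'sky']
```

Note how the second ray triggers a nested trace from inside its hit shader, while the third ray finds nothing and falls through to the miss shader.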

microsoft-2018-gdc-directx12raytracing-multibounce.png

Even more interesting – the closest-hit, any-hit, and miss shaders can call TraceRay themselves, which is used for multi-bounce and other recursive algorithms (see: figure above). The obvious use case might be reflections, which is the headline of the GDC talk, but Microsoft wants it to be as general as possible, aligning with the evolution of GPUs. Looking at NVIDIA’s VXAO implementation, it also seems like a natural fit for a raytracing algorithm.

Speaking of data structures, Microsoft also detailed what it calls the acceleration structure. Each object is composed of two levels. The top level contains per-object metadata, like its transformation and whatever other data the developer wants to add. The bottom level contains the geometry. The briefing states, “essentially vertex and index buffers”, so we asked for clarification. DXR requires that triangle geometry be specified as vertex positions in either 32-bit or 16-bit float3 values. There is also a stride property, so developers can tweak data alignment and reuse their rasterization vertex buffers, as long as the data is HLSL float3, either 16-bit or 32-bit.
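The two-level layout described above can be sketched as plain data. The field names here are invented for illustration; the point is that the top level holds per-instance metadata (a translation, in this toy case) while the bottom level holds the raw float3 geometry, which lets many instances share one mesh:

```python
# Bottom level: geometry only ("essentially vertex and index buffers").
blas = {
    "vertices": [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    "indices": [0, 1, 2],
}

# Top level: one entry per object, pointing at a bottom-level structure
# plus whatever metadata the developer wants to attach.
tlas = [
    {"geometry": blas, "translate": (0.0, 0.0, 0.0)},
    {"geometry": blas, "translate": (5.0, 0.0, 0.0)},  # same mesh, reused
]

def world_vertices(instance):
    # Apply the instance transform (here just a translation) to the
    # shared bottom-level geometry.
    tx, ty, tz = instance["translate"]
    return [(x + tx, y + ty, z + tz)
            for (x, y, z) in instance["geometry"]["vertices"]]

print(world_vertices(tlas[1])[1])  # (6.0, 0.0, 0.0)
```

A real DXR acceleration structure is an opaque GPU resource built by the driver, but the instance-over-geometry split is the same idea.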

As for the tools to develop this in…

microsoft-2018-gdc-PIX.png

Microsoft announced PIX back in January 2017. This is a debugging and performance analyzer for 64-bit, DirectX 12 applications. Microsoft will upgrade it to support DXR as soon as the API is released (specifically, “Day 1”). This includes the API calls, the raytracing pipeline resources, the acceleration structure, and so forth. As usual, you can expect Microsoft to support their APIs with quite decent – not perfect, but decent – documentation and tools. They do it well, and they want to make sure it’s available when the API is.

ea-2018-SEED screenshot (002).png

Example of DXR via EA's in-development SEED engine.

In short, raytracing is here, but it’s not taking over rasterization. It doesn’t need to. Microsoft is just giving game developers another, standardized mechanism to gather supplementary data for their games. Several game engines have already announced support for this technology, including the usual suspects of top-tier game technology:

  • Frostbite (EA/DICE)
  • SEED (EA)
  • 3DMark (Futuremark)
  • Unreal Engine 4 (Epic Games)
  • Unity Engine (Unity Technologies)

They also said, “and several others we can’t disclose yet”, so this list is not even complete. But, yeah, if you have Frostbite, Unreal Engine, and Unity, then you have a sizeable market as it is. There is always a question of how much each of these engines will support the technology. Currently, raytracing is not portable outside of DirectX 12, because it’s literally being announced today, and each of these engines intends to support more than just Windows 10 and Xbox.

Still, we finally have a standard for raytracing, which should drive vendors to optimize in a specific direction. From there, it's just a matter of someone taking the risk to actually use the technology for a cool work of art.

If you want to read more, check out Ryan's post about the also-announced RTX, NVIDIA's raytracing technology.

Manufacturer: Microsoft

It's all fun and games until something something AI.

Microsoft announced the Windows Machine Learning (WinML) API about two weeks ago, but they did so in a sort-of abstract context. This week, alongside the 2018 Game Developers Conference, they are grounding it in a practical application: video games!

microsoft-2018-winml-graphic.png

Specifically, the API provides the mechanisms for game developers to run inference on the target machine. The trained models that it runs against would be in the Open Neural Network Exchange (ONNX) format from Microsoft, Facebook, and Amazon. Like the initial announcement suggests, it can be used for any application, not just games, but… you know. If you want to get a technology off the ground, and it requires a high-end GPU, then video game enthusiasts are good lead users. When run in a DirectX application, WinML kernels are queued on the DirectX 12 compute queue.
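To ground the terminology: "inference" is just running a trained model forward on new inputs, with the weights produced by training done elsewhere. A toy sketch with hand-written weights (WinML would instead load them from an ONNX file and queue the math on the DX12 compute queue):

```python
def relu(x):
    # Standard rectified-linear activation.
    return max(0.0, x)

def infer(weights, bias, inputs):
    # One linear layer plus an activation -- the simplest possible
    # stand-in for the kernels an inference runtime executes.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return relu(total)

# Pretend these values came out of a model trained offline:
w, b = [0.5, 0.5], 0.0
print(infer(w, b, [1.0, 1.0]))     # 1.0
print(infer(w, -2.0, [1.0, 1.0]))  # 0.0 (ReLU clamps the negative sum)
```

The point of the API is that only this forward pass runs on the player's machine; the expensive training against billions of examples happens ahead of time.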

We’ve discussed the concept before. When you’re rendering a video game, simulating an accurate scenario isn’t your goal – the goal is to look like you are. The direct way of looking like you’re doing something is to do it. The problem is that some effects are too slow (or, sometimes, too complicated) to correctly simulate. In these cases, it might be viable to make a deep-learning AI hallucinate a convincing result, even though no actual simulation took place.

Fluid dynamics, global illumination, and up-scaling are three examples.

Previously mentioned SIGGRAPH demo of fluid simulation without fluid simulation...
... just a trained AI hallucinating a scene based on input parameters.

Another place where AI could be useful is… well… AI. One way of making AI is to give it some set of data from the game environment, often including information that a player in its position would not be able to know, and having it run against a branching logic tree. Deep learning, on the other hand, can train itself on billions of examples of good and bad play, and produce results based on input parameters. While the two methods do not sound that different, assembling logic from an abstract good/bad dataset (rather than designing it by hand) abstracts away the potential for faulty assumptions and programmer error. Of course, it shifts that potential for error into the training dataset, but that’s a whole other discussion.

The third area that AI could be useful is when you’re creating the game itself.

There’s a lot of grunt and grind work when developing a video game. Licensing prefab solutions (or commissioning someone to do a one-off asset for you) helps ease this burden, but that gets expensive in terms of both time and money. If some of those assets could be created by giving parameters to a deep-learning AI, then those are assets that you would not need to make, allowing you to focus on other assets and how they all fit together.

These are three of the use cases that Microsoft is aiming WinML at.

nvidia-2018-deeplearningcarupscale.png

Sure, these are smooth curves of large details, but the antialiasing pattern looks almost perfect.

For instance, Microsoft is pointing to an NVIDIA demo where they up-sample a photo of a car, once with bilinear filtering and once with a machine learning algorithm (although not WinML-based). The bilinear algorithm behaves exactly as someone who has used Photoshop would expect. The machine learning algorithm, however, was able to identify the objects that the image intended to represent, and it drew the edges that it thought made sense.
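To make concrete what the "Photoshop-style" baseline in that comparison does, here is a minimal bilinear 2x upscale of a grayscale image: every new sample is a distance-weighted blend of its four neighbors, so hard edges become ramps rather than being re-drawn the way the learned upscaler re-draws them. Pure illustrative Python, not any real imaging library:

```python
def bilinear_upscale(img, factor):
    h, w = len(img), len(img[0])
    out = []
    for oy in range(h * factor):
        # Map the output pixel back to a fractional source position.
        sy = min(oy / factor, h - 1)
        y0 = int(sy); y1 = min(y0 + 1, h - 1); fy = sy - y0
        row = []
        for ox in range(w * factor):
            sx = min(ox / factor, w - 1)
            x0 = int(sx); x1 = min(x0 + 1, w - 1); fx = sx - x0
            # Blend horizontally on both rows, then vertically.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

big = bilinear_upscale([[0.0, 1.0], [0.0, 1.0]], 2)
print(big[0])  # [0.0, 0.5, 1.0, 1.0] -- a hard edge becomes a ramp
```

A learned upscaler, by contrast, can output a sharp 0-to-1 step at the position it believes the edge should be, which is why the car's antialiasing pattern in NVIDIA's demo looks so clean.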

microsoft-2018-gdc-PIX.png

Like their DirectX Raytracing (DXR) announcement, Microsoft plans to have PIX support WinML “on Day 1”. As for partners? They are currently working with Unity Technologies to provide WinML support in Unity’s ML-Agents plug-in. That’s all the game industry partners they have announced at the moment, though. It’ll be interesting to see who jumps in and who doesn’t over the next couple of years.

NVIDIA RTX Technology Accelerates Ray Tracing for Microsoft DirectX Raytracing API

Subject: Graphics Cards | March 19, 2018 - 01:00 PM |
Tagged: rtx, nvidia, dxr

The big news from the Game Developers Conference this week was Microsoft’s reveal of its work on a new ray tracing API for DirectX called DirectX Raytracing. As the name would imply, this is a new initiative to bring the image quality improvements of ray tracing to consumer hardware with the push of Microsoft’s DX team. Scott already has a great write up on that news and current and future implications of what it will mean for PC gamers, so I highly encourage you all to read that over before diving more into this NVIDIA-specific news.

Ray tracing has long been the holy grail of real-time rendering. It is the gap between movies and games: though ray tracing continues to improve in performance, it still takes the power of offline server farms to render the images for your favorite flicks. Modern game engines continue to use rasterization, an efficient method for rendering graphics but one that depends on tricks and illusions to recreate the intended image. Ray tracing inherently solves the problems that rasterization works around, including shadows, transparency, refraction, and reflection, but it does so at a prohibitive performance cost. That may be changing with Microsoft’s enablement of ray tracing through a common API and technology like what NVIDIA has built to accelerate it.

04.jpg

Alongside support and a verbal commitment to DXR, NVIDIA is announcing RTX Technology. This is a combination of hardware and software advances to improve the performance of ray tracing algorithms on its hardware, and it works hand in hand with DXR. NVIDIA believes this is the culmination of 10 years of development on ray tracing, much of which we have talked about on this site from the world of professional graphics systems. Think Iray, OptiX, and more.

RTX will run on Volta GPUs only today, which does limit its usefulness to gamers. With the only such graphics card on the market even close to being considered a gaming product the $3,000 TITAN V, RTX is more of a forward-looking technology announcement for the company. We can safely assume that RTX technology will be integrated into any future consumer gaming graphics cards, be that a revision of Volta or something completely different. (NVIDIA refused to acknowledge plans for any pending Volta consumer GPUs during our meeting.)

The idea I get from NVIDIA is that today’s RTX is meant as a developer enablement platform, getting developers used to the idea of adding ray tracing effects to their games and engines, and making the case that NVIDIA provides the best hardware to get that done.

I’ll be honest with you – NVIDIA was light on the details of what RTX exactly IS and how it accelerates ray tracing. One very interesting example I was given was first seen with the AI-powered ray tracing optimizations for OptiX from last year’s GDC. There, NVIDIA demonstrated that, using the Volta Tensor cores, it could run an AI-powered de-noiser on the ray-traced image, effectively improving the quality of the resulting image and emulating much higher ray counts than are actually processed.

By using the Tensor cores with RTX for the DXR implementation on the TITAN V, NVIDIA will be able to offer image quality and performance for ray tracing well ahead of even the TITAN Xp or GTX 1080 Ti, as those GPUs do not have Tensor cores on-board. Does this mean that all (or at least flagship) consumer graphics cards from NVIDIA will include Tensor cores to enable RTX performance? Obviously, NVIDIA wouldn’t confirm that, but to me it makes sense that we will see that in future generations. The scale of Tensor core integration might change based on price points, but if NVIDIA and Microsoft truly believe in the future of ray tracing to augment and eventually significantly replace rasterization methods, then it will be necessary.

Though that is one example of hardware-specific features being used for RTX on NVIDIA hardware, it’s not the only one on Volta. But NVIDIA wouldn’t share more.

The relationship between Microsoft DirectX Raytracing and NVIDIA RTX is a bit confusing, but it’s easier to think of RTX as the underlying brand for the ability to ray trace on NVIDIA GPUs. The DXR API is still the interface between the game developer and the hardware, but RTX is what gives NVIDIA the advantage over AMD and its Radeon graphics cards, at least according to NVIDIA.

DXR will still run on other GPUs from NVIDIA that aren’t built on the Volta architecture. Microsoft says that any board that can support DX12 Compute will be able to run the new API. But NVIDIA did point out that, in its mind, even with a high-end SKU like the GTX 1080 Ti, ray tracing performance will limit the ability to integrate ray tracing features and enhancements in real-time game engines in the immediate timeframe. It’s not to say it is impossible, or that some engine devs won't spend the time to build something unique, but it is interesting to hear NVIDIA imply that only future products will benefit from ray tracing in games.

It’s also likely that we are months if not a year or more from seeing good integration of DXR in games at retail. And it is also possible that NVIDIA is downplaying the importance of DXR performance today if it happens to be slower than the Vega 64 in the upcoming Futuremark benchmark release.

05.jpg

Alongside the RTX announcement comes GameWorks Ray Tracing, a collection of turnkey modules based on DXR. GameWorks has its own reputation, and we aren't going to get into that here, but NVIDIA wants us to think of this addition as a way to "turbo charge enablement" of ray tracing effects in games.

NVIDIA believes that developers are incredibly excited for the implementation of ray tracing into game engines, and that the demos being shown at GDC this week will blow us away. I am looking forward to seeing them and for getting the reactions of major game devs on the release of Microsoft’s new DXR API. The performance impact of ray tracing will still be a hindrance to larger scale implementations, but with DXR driving the direction with a unified standard, I still expect to see some games with revolutionary image quality by the end of the year. 

Source: NVIDIA

HTC announces VIVE Pro Pricing, Available now for Preorder

Subject: General Tech, Graphics Cards | March 19, 2018 - 12:09 PM |
Tagged: vive pro, steamvr, rift, Oculus, Lighthouse, htc

Today, HTC has provided what VR enthusiasts have been eagerly awaiting since the announcement of the upgraded VIVE Pro headset at CES earlier this year: the pricing and availability of the new device.

vivepro.png

Available for preorder today, the VIVE Pro will cost $799 for the headset-only upgrade. As we mentioned during the VIVE Pro announcement, this first upgrade kit is meant for existing VIVE users who will be reusing their original controllers and lighthouse trackers to get everything up and running.

The HMD-only kit, with its upgraded resolution and optics, is set to start shipping on April 3 and can be preordered now on the HTC website.

Additionally, your VIVE Pro purchase (through June 3rd, 2018) will come with a free six-month subscription to HTC's VIVEPORT subscription game service, which grants access to up to 5 titles per month for free (chosen from the VIVEPORT catalog of 400+ games).

There is still no word on the pricing and availability of the full VIVE Pro kit including the updated Lighthouse 2.0 trackers, but it seems likely that it will come later in the summer, after the upgrade kit has saturated the market of current VIVE owners.

As far as system requirements go, the HTC site doesn't list any difference between the standard VIVE and the VIVE Pro. One change, however, is the lack of an HDMI port on the new VIVE Pro link box, so you'll need a graphics card with an open DisplayPort 1.2 connector. 

Source: HTC

Asus Introduces Gemini Lake-Powered J4005I-C Mini ITX Motherboard

Subject: Graphics Cards, Motherboards | March 8, 2018 - 02:55 AM |
Tagged: passive cooling, mini ITX, j4005i-c, Intel, gemini lake, fanless, asus

Asus is launching a new Mini ITX motherboard packing a passively-cooled Intel Celeron J4005 "Gemini Lake" SoC. The aptly-named Asus Prime J4005I-C is aimed at embedded systems such as point-of-sale machines, low-end networked storage, kiosks, and industrial control and monitoring systems. It features "5x Protection II" technology, which includes extended validation and compatibility/QVL testing, overcurrent and overvoltage protection, network port surge protection, and ESD resistance. The board also features a UEFI BIOS with AI Suite.

Asus Prime J4005I-C Mini ITX Gemini Lake Motherboard.jpg

The motherboard features an Intel Celeron J4005 processor with two cores (2.0 GHz base and 2.7 GHz boost), 4MB cache, Intel UHD 600 graphics, and a 10W TDP. The SoC is passively cooled by a copper colored aluminum heatsink. The processor supports up to 8GB of 2400 MHz RAM and the motherboard has two DDR4 DIMM slots. Storage is handled by two SATA 6 Gbps ports and one M.2 slot (PCI-E x2) for SSDs. Further, the Prime J4005I-C has an E-key M.2 slot for WLAN and Bluetooth modules (PCI-E x2 or USB mode) along with headers for USB 2.0, USB 3.1 Gen 1, LVDS, and legacy LPT and COM ports.

Rear I/O includes two PS/2, two USB 2.0, one Gigabit Ethernet (Realtek RTL8111H), two USB 3.1 Gen 1, one HDMI, one D-SUB, one RS232, and three audio ports (Realtek ALC887-UD2).

The motherboard does not appear to be for sale yet in the US, but Fanless Tech notes that it is listed for around 80 euros overseas (~$100 USD). More Gemini Lake options are always good, and Asus now has one with PCI-E M.2 support, though I see this board being more popular with commercial/industrial sectors than with enthusiasts unless it goes on sale.

Source: Asus

AMD Project ReSX brings performance and latency improvements to select games

Subject: Graphics Cards | March 5, 2018 - 06:07 PM |
Tagged: amd, radeon, Adrenalin, resx

We all know that driver-specific and per-game optimization happens for all major GPU vendors, including AMD and NVIDIA, but also Intel, and even mobile SoC vendors. Working with game developers and tweaking your own driver is common practice to help deliver the best possible gaming experience to your customers.

During the launch of the Radeon Vega graphics cards, AMD discussed with the media an initiative to lower the input latency for some key, highly sensitive titles. Those mostly focused around the likes of Counter-Strike: GO, DOTA 2, League of Legends, etc. They targeted very specific use cases, low-hanging fruit, which the engineering team had recognized could improve gameplay. This included better management of buffers and timing windows to decrease the time from input to display, but had a very specific selection of games and situations it could address.

And while AMD continues to tout its dedication to day-zero driver releases and having an optimized gaming experience for Radeon users on the day of release of a new major title, AMD apparently saw fit to focus a portion of its team on another specific project, this time addressing what it called “the best possible eSports experience.”

So Project ReSX was born (Radeon eSports Experience). Its goal was to optimize performance in some of the “most popular” PC games for Radeon GPUs. The efforts included driver-level fixes, tweaks, and optimizations, as well as direct interaction with the game developers themselves. Depending on the level of involvement the developer would accept, AMD would either help optimize the engine and game code locally or would send AMD engineering talent to work with the developer on-site for some undisclosed period of time to help address performance concerns.

Driver release 18.3.1, which is posted on AMD’s website right now, integrates these fixes; the company says they are available immediately in some titles and will be “rolling into games in the coming weeks.”

Results that AMD has shared look moderately impressive.

amd_resx_table.jpg

In PUBG, for example, AMD is seeing an 11% improvement in average frame rate and a 9% improvement in the 99th percentile frame time, an indicator of smoothness. Overwatch and DOTA 2 are included as well, though the numbers are a bit lower at 3% and 6%, respectively, in terms of average frame rate. AMD claims that the “click to response” measurement (using high-speed cameras for testing) was as much as 8% faster in DOTA 2.
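The two metrics quoted above relate like this: average frame rate comes from the mean frame time, while the 99th-percentile frame time captures the occasional slow frame that the average hides. A quick sketch with made-up numbers:

```python
def fps_stats(frame_times_ms):
    # Average frame rate is derived from the mean frame time.
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    ordered = sorted(frame_times_ms)
    # 99th-percentile frame time: 99% of frames were at least this fast.
    idx = min(int(len(ordered) * 0.99), len(ordered) - 1)
    return 1000.0 / avg_ms, ordered[idx]

# 99 smooth frames plus one bad stutter:
times = [10.0] * 99 + [50.0]
avg_fps, p99_ms = fps_stats(times)
print(round(avg_fps, 1), p99_ms)  # ~96.2 fps average, but a 50.0 ms p99
```

That single 50 ms frame barely moves the average, yet dominates the 99th percentile, which is why percentile frame times are the better smoothness indicator.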

This is great news for Radeon owners, and not just RX 580 customers. AMD’s Scott Wasson told me that, if anything, the gaps may widen with the Radeon Vega lineup, but that AMD wanted to focus on where the graphics card lineup struggled more with this class of game. PLAYERUNKNOWN’S BATTLEGROUNDS is known to be a highly unoptimized game, and seeing work from AMD at the driver and developer relations levels is fantastic.

However, there are a couple of other things to keep in mind. These increases in performance are in comparison to the 17.12.1 release, the first Adrenalin launch driver from December of last year. Several drivers have been released between then and now, so we have likely already seen SOME of this increase along the way.

Also, while this initiative and project are the right track for AMD to be on, the company isn’t committing to any future releases along these lines. To me, giving this release and direction a marketing name and calling it a “project” indicates that there is or will be continued work on this front: key optimizations and developer work for very popular titles even after the initial launch window. All I was told today was that “there may be” more coming down the pipeline, but they had nothing to announce at this time. Hmph.

Also note that NVIDIA hasn’t been sitting idle during this time. In fact, the last email I received from NVIDIA’s driver team indicates that driver 391.01 offers “performance improvements in PlayerUnknown’s Battlegrounds (PUBG), which exhibits performance improvements up to 7% percent”. The website lists a specific table with performance uplifts:

nvtable.png

While I am very happy to see AMD keeping its continued software promise for further development and optimization for current customers going strong, it simply HAS TO if it wants to keep pace with the efforts of the competition.

All that being said – if you have a Radeon graphics card and plan on joining us to parachute in for some PUBG matches tonight, go grab the new driver immediately!

Source: Radeon