Manufacturer: Microsoft

It's all fun and games until something something AI.

Microsoft announced the Windows Machine Learning (WinML) API about two weeks ago, but they did so in a somewhat abstract context. This week, alongside the 2018 Game Developers Conference, they are grounding it in a practical application: video games!

microsoft-2018-winml-graphic.png

Specifically, the API provides the mechanisms for game developers to run inference on the target machine. The trained models that it runs would be in the Open Neural Network Exchange (ONNX) format, which is backed by Microsoft, Facebook, and Amazon. As the initial announcement suggested, it can be used for any application, not just games, but… you know. If you want to get a technology off the ground, and it requires a high-end GPU, then video game enthusiasts are good lead users. When run in a DirectX application, WinML kernels are queued on the DirectX 12 compute queue.
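To make that a little more concrete, running inference against an ONNX model is conceptually just "load model, bind an input tensor, evaluate." The sketch below uses the onnxruntime Python package as a stand-in for the kind of API WinML exposes (it is not WinML itself), and the model file name and tensor shape are hypothetical:

```python
# Minimal sketch of ONNX inference, using onnxruntime as a stand-in for
# the kind of model-loading / evaluation API that WinML exposes.
# "upscaler.onnx" and the tensor shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

# Load a trained model that was exported to the ONNX format.
session = ort.InferenceSession("upscaler.onnx")

# Build an input tensor matching the shape the model expects
# (here: one RGB image, NCHW layout, 256x256 pixels).
frame = np.random.rand(1, 3, 256, 256).astype(np.float32)

# Run inference; the heavy lifting happens on whatever execution provider
# is available (the CPU by default here; WinML would dispatch to the GPU
# through the DirectX 12 compute queue).
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: frame})

print(outputs[0].shape)
```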

We’ve discussed the concept before. When you’re rendering a video game, accurately simulating a scenario isn’t the goal – the goal is to look like you did. The most direct way to look like you’re doing something is, of course, to actually do it. The problem is that some effects are too slow (or, sometimes, too complicated) to simulate correctly. In these cases, it might be viable to have a deep-learning AI hallucinate a convincing result, even though no actual simulation took place.

Fluid dynamics, global illumination, and up-scaling are three examples.

Previously mentioned SIGGRAPH demo of fluid simulation without fluid simulation...
... just a trained AI hallucinating a scene based on input parameters.

Another place where AI could be useful is… well… AI. One way of making game AI is to give it a set of data from the game environment, often including information that a player in its position could not possibly know, and run it through a branching logic tree. Deep learning, on the other hand, can train on billions of examples of good and bad play and produce decisions based on input parameters. While the two methods may not sound that different, having the logic assembled from an abstract good/bad dataset (rather than designed by hand) somewhat abstracts away the potential for faulty assumptions and programmer error. Of course, it shifts that potential for error into the training dataset, but that’s a whole other discussion.

The third area where AI could be useful is in creating the game itself.

There’s a lot of grunt and grind work when developing a video game. Licensing prefab solutions (or commissioning someone to do a one-off asset for you) helps ease this burden, but that gets expensive in terms of both time and money. If some of those assets could be created by giving parameters to a deep-learning AI, then those are assets that you would not need to make, allowing you to focus on other assets and how they all fit together.

These are three of the use cases that Microsoft is aiming WinML at.

nvidia-2018-deeplearningcarupscale.png

Sure, these are smooth curves of large details, but the antialiasing pattern looks almost perfect.

For instance, Microsoft is pointing to an NVIDIA demo where they up-sample a photo of a car, once with bilinear filtering and once with a machine learning algorithm (although not WinML-based). The bilinear algorithm behaves exactly as someone who has used Photoshop would expect. The machine learning algorithm, however, was able to identify the objects that the image intended to represent, and it drew the edges that it thought made sense.
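As a point of reference, bilinear filtering is just a weighted average of the four nearest source pixels, which is why it cannot invent detail the way a trained network can. Here is a rough numpy sketch of that baseline – purely illustrative, not the code used in NVIDIA's demo:

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Upscale an H x W x C image by `factor` using bilinear interpolation."""
    h, w, c = img.shape
    new_h, new_w = h * factor, w * factor

    # Map each destination pixel back to a (fractional) source coordinate.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)

    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)

    wy = (ys - y0)[:, None, None]   # vertical blend weights
    wx = (xs - x0)[None, :, None]   # horizontal blend weights

    # Blend the four surrounding source pixels.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bottom = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bottom * wy

# Example: 4x upscale of a random 64x64 RGB image.
low_res = np.random.rand(64, 64, 3)
high_res = bilinear_upscale(low_res, 4)
print(high_res.shape)  # (256, 256, 3)
```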

microsoft-2018-gdc-PIX.png

As with their DirectX Raytracing (DXR) announcement, Microsoft plans to have PIX support WinML “on Day 1”. As for partners? They are currently working with Unity Technologies to provide WinML support in Unity’s ML-Agents plug-in. That is the only game-industry partner they have announced at the moment, though. It’ll be interesting to see who jumps in and who doesn’t over the next couple of years.

NVIDIA RTX Technology Accelerates Ray Tracing for Microsoft DirectX Raytracing API

Subject: Graphics Cards | March 19, 2018 - 01:00 PM |
Tagged: rtx, nvidia, dxr

The big news from the Game Developers Conference this week was Microsoft’s reveal of its work on a new ray tracing API for DirectX called DirectX Raytracing. As the name would imply, this is a new initiative to bring the image quality improvements of ray tracing to consumer hardware with the push of Microsoft’s DX team. Scott already has a great write up on that news and current and future implications of what it will mean for PC gamers, so I highly encourage you all to read that over before diving more into this NVIDIA-specific news.

Ray tracing has long been the holy grail of real-time rendering. It is the gap between movies and games – though ray tracing continues to improve in performance, it still takes the power of offline server farms to render the images for your favorite flicks. Modern game engines continue to use rasterization, an efficient method for rendering graphics but one that depends on tricks and illusions to recreate the intended image. Ray tracing inherently solves the problems that rasterization works around, including shadows, transparency, refraction, and reflection – but it does so at a prohibitive performance cost. That will be changing with Microsoft’s enablement of ray tracing through a common API and technology like what NVIDIA has built to accelerate it.
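To see why ray tracing handles something like shadows "inherently," consider the minimal sketch below: once you can intersect a ray with the scene, a shadow test is just a second ray fired from the hit point toward the light. The one-sphere scene and all of the numbers are hypothetical:

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest intersection, or None if the ray misses."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c           # direction is assumed to be normalized (a = 1)
    if disc < 0:
        return None
    for t in ((-b - np.sqrt(disc)) / 2.0, (-b + np.sqrt(disc)) / 2.0):
        if t > 1e-4:                 # small epsilon avoids self-intersection
            return t
    return None

# A hypothetical one-sphere "scene" with a single point light.
sphere_center, sphere_radius = np.array([0.0, 0.0, -3.0]), 1.0
light_pos = np.array([5.0, 5.0, 0.0])

# Primary ray from the camera through a pixel.
cam = np.array([0.0, 0.0, 0.0])
ray_dir = np.array([0.0, 0.0, -1.0])

t = ray_sphere_hit(cam, ray_dir, sphere_center, sphere_radius)
if t is not None:
    hit_point = cam + t * ray_dir
    # Shadow ray: march from the hit point toward the light. If anything is
    # in the way, the point is shadowed -- no shadow-map tricks required.
    to_light = light_pos - hit_point
    to_light /= np.linalg.norm(to_light)
    in_shadow = ray_sphere_hit(hit_point, to_light, sphere_center, sphere_radius) is not None
    print("hit at", hit_point, "in shadow:", in_shadow)
else:
    print("ray missed the scene")
```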

04.jpg

Alongside support and a verbal commitment to DXR, NVIDIA is announcing RTX Technology. This is a combination of hardware and software advances to improve the performance of ray tracing algorithms on its hardware, and it works hand in hand with DXR. NVIDIA believes this is the culmination of 10 years of development on ray tracing, much of which we have talked about on this site from the world of professional graphics systems. Think Iray, OptiX, and more.

RTX will run on Volta GPUs only today, which does limit its usefulness to gamers. With the only graphics card on the market that comes even close to being considered a gaming product being the $3,000 TITAN V, RTX is more of a forward-looking technology announcement for the company. We can safely assume that RTX technology will be integrated into any future consumer gaming graphics cards, be that a revision of Volta or something completely different. (NVIDIA refused to acknowledge plans for any pending Volta consumer GPUs during our meeting.)

The idea I get from NVIDIA is that today’s RTX is meant as a developer enablement platform: getting developers used to the idea of adding ray tracing effects into their games and engines, and convincing them that NVIDIA provides the best hardware to get that done.

I’ll be honest with you – NVIDIA was light on the details of what RTX exactly IS and how it accelerates ray tracing. One very interesting example I was given was first shown with the AI-powered ray tracing optimizations for OptiX at last year’s GDC. There, NVIDIA demonstrated that, using the Volta Tensor cores, it could run an AI-powered de-noiser on the ray traced image, effectively improving the quality of the resulting image and emulating much higher ray counts than are actually processed.

By using the Tensor cores with RTX for the DXR implementation on the TITAN V, NVIDIA will be able to offer image quality and performance for ray tracing well ahead of even the TITAN Xp or GTX 1080 Ti, as those GPUs do not have Tensor cores on-board. Does this mean that all (or at least flagship) consumer graphics cards from NVIDIA will include Tensor cores to enable RTX performance? Obviously, NVIDIA wouldn’t confirm that, but to me it makes sense that we will see that in future generations. The scale of Tensor core integration might change based on price points, but if NVIDIA and Microsoft truly believe in the future of ray tracing to augment and, to a significant degree, replace rasterization methods, then it will be necessary.

That is one example of a hardware-specific feature being used for RTX on NVIDIA hardware, but it’s not the only one present on Volta. NVIDIA wouldn’t share more.

The relationship between Microsoft DirectX Raytracing and NVIDIA RTX is a bit confusing, but it’s easier to think of RTX as the underlying brand for the ability to ray trace on NVIDIA GPUs. The DXR API is still the interface between the game developer and the hardware, but RTX is what gives NVIDIA the advantage over AMD and its Radeon graphics cards, at least according to NVIDIA.

DXR will still run on other GPUs from NVIDIA that aren’t utilizing the Volta architecture. Microsoft says that any board that can support DX12 Compute will be able to run the new API. But NVIDIA did point out that, in its mind, even with a high-end SKU like the GTX 1080 Ti, ray tracing performance will limit the ability to integrate ray tracing features and enhancements into real-time game engines in the immediate timeframe. That’s not to say it is impossible, or that some engine devs won’t spend the time to build something unique, but it is interesting to hear NVIDIA imply that only future products will benefit from ray tracing in games.

It’s also likely that we are months if not a year or more from seeing good integration of DXR in games at retail. And it is also possible that NVIDIA is downplaying the importance of DXR performance today if it happens to be slower than the Vega 64 in the upcoming Futuremark benchmark release.

05.jpg

Alongside the RTX announcement comes GameWorks Ray Tracing, a collection of turnkey modules based on DXR. GameWorks has its own reputation, and we aren't going to get into that here, but NVIDIA wants to think of this addition to it as a way to "turbo charge enablement" of ray tracing effects in games.

NVIDIA believes that developers are incredibly excited for the implementation of ray tracing into game engines, and that the demos being shown at GDC this week will blow us away. I am looking forward to seeing them and for getting the reactions of major game devs on the release of Microsoft’s new DXR API. The performance impact of ray tracing will still be a hindrance to larger scale implementations, but with DXR driving the direction with a unified standard, I still expect to see some games with revolutionary image quality by the end of the year. 

Source: NVIDIA

HTC announces VIVE Pro Pricing, Available now for Preorder

Subject: General Tech, Graphics Cards | March 19, 2018 - 12:09 PM |
Tagged: vive pro, steamvr, rift, Oculus, Lighthouse, htc

Today, HTC has provided what VR enthusiasts have been eagerly awaiting since the announcement of the upgraded VIVE Pro headset at CES earlier this year: the pricing and availability of the new device.

vivepro.png

Available for preorder today, the VIVE Pro will cost $799 for the headset-only upgrade. As we mentioned during the VIVE Pro announcement, this first upgrade kit is meant for existing VIVE users who will be reusing their original controllers and lighthouse trackers to get everything up and running.

The HMD-only kit, with its upgraded resolution and optics, is set to start shipping soon, on April 3, and can be preordered now on the HTC website.

Additionally, your VIVE Pro purchase (through June 3rd, 2018) will come with a free six-month subscription to HTC's VIVEPORT subscription game service, which gives you access to up to 5 titles per month (chosen from the VIVEPORT catalog of 400+ games).

There is still no word on the pricing and availability of the full VIVE Pro kit, including the updated Lighthouse 2.0 trackers, but it seems likely that it will come later in the summer, after the upgrade kit has saturated the market of current VIVE owners.

As far as system requirements go, the HTC site doesn't list any difference between the standard VIVE and the VIVE Pro. One change, however, is the lack of an HDMI port on the new VIVE Pro link box, so you'll need a graphics card with an open DisplayPort 1.2 connector. 

Source: HTC

Asus Introduces Gemini Lake-Powered J4005I-C Mini ITX Motherboard

Subject: Graphics Cards, Motherboards | March 8, 2018 - 02:55 AM |
Tagged: passive cooling, mini ITX, j4005i-c, Intel, gemini lake, fanless, asus

Asus is launching a new Mini ITX motherboard packing a passively cooled Intel Celeron J4005 "Gemini Lake" SoC. The aptly named Asus Prime J4005I-C is aimed at embedded systems such as point-of-sale machines, low-end networked storage, kiosks, and industrial control and monitoring systems. It features "5x Protection II" technology, which includes extended validation and compatibility/QVL testing, overcurrent and overvoltage protection, network port surge protection, and ESD resistance. The board also features a UEFI BIOS with AI Suite.

Asus Prime J4005I-C Mini ITX Gemini Lake Motherboard.jpg

The motherboard features an Intel Celeron J4005 processor with two cores (2.0 GHz base and 2.7 GHz boost), 4MB cache, Intel UHD 600 graphics, and a 10W TDP. The SoC is passively cooled by a copper colored aluminum heatsink. The processor supports up to 8GB of 2400 MHz RAM and the motherboard has two DDR4 DIMM slots. Storage is handled by two SATA 6 Gbps ports and one M.2 slot (PCI-E x2) for SSDs. Further, the Prime J4005I-C has an E-key M.2 slot for WLAN and Bluetooth modules (PCI-E x2 or USB mode) along with headers for USB 2.0, USB 3.1 Gen 1, LVDS, and legacy LPT and COM ports.

Rear I/O includes two PS/2, two USB 2.0, one Gigabit Ethernet (Realtek RTL8111H), two USB 3.1 Gen 1, one HDMI, one D-SUB, one RS232, and three audio ports (Realtek ALC887-UD2).

The motherboard does not appear to be for sale yet in the US, but Fanless Tech notes that it is listed for around 80 euros overseas (~$100 USD). More Gemini Lake options are always good, and Asus now has one with PCI-E M.2 support, though I see this board being more popular with commercial/industrial sectors than with enthusiasts unless it goes on sale.

Source: Asus

AMD Project ReSX brings performance and latency improvements to select games

Subject: Graphics Cards | March 5, 2018 - 06:07 PM |
Tagged: amd, radeon, Adrenalin, resx

We all know that driver-specific and per-game optimization happens for all major GPU vendors – not just AMD and NVIDIA, but also Intel and even mobile SoC vendors. Working with game developers and tweaking your own driver is common practice for delivering the best possible gaming experience to your customers.

During the launch of the Radeon Vega graphics cards, AMD discussed with the media an initiative to lower input latency in some key, highly sensitive titles – mostly the likes of Counter-Strike: GO, DOTA 2, League of Legends, etc. It targeted very specific use cases, low-hanging fruit that the engineering team had recognized could improve gameplay. This included better management of buffers and timing windows to decrease the time from input to display, but it only addressed a very specific selection of games and situations.

And while AMD continues to tout its dedication to day-zero driver releases and having an optimized gaming experience for Radeon users on the day of release of a new major title, AMD apparently saw fit to focus a portion of its team on another specific project, this time addressing what it called “the best possible eSports experience.”

So Project ReSX (Radeon eSports Experience) was born. Its goal was to optimize performance in some of the “most popular” PC games for Radeon GPUs. The efforts included driver-level fixes, tweaks, and optimizations, as well as direct interaction with the game developers themselves. Depending on the level of involvement the developer would accept, AMD would either help optimize the engine and game code locally or send AMD engineering talent to work with the developer on-site for some undisclosed period of time to help address performance concerns.

Driver release 18.3.1, which is posted on AMD’s website right now, integrates these fixes; the company says they are available immediately for some titles and will be “rolling into games in the coming weeks.”

Results that AMD has shared look moderately impressive.

amd_resx_table.jpg

In PUBG, for example, AMD is seeing an 11% improvement in average frame rate and a 9% improvement in the 99th percentile frame time, an indicator of smoothness. Overwatch and DOTA 2 are included as well, though the numbers are a bit lower at 3% and 6%, respectively, in terms of average frame rate. AMD claims that the “click to response” measurement (tested using high-speed cameras) was as much as 8% faster in DOTA 2.
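If the 99th percentile frame time metric is unfamiliar, it is simply the frame time that 99% of frames finish under, so it is dominated by the occasional slow frame rather than by the average. A quick illustration with made-up frame times:

```python
import numpy as np

# Hypothetical per-frame render times in milliseconds for a short run.
frame_times_ms = np.array([15.8, 16.1, 16.4, 16.0, 15.9, 16.2, 33.5,
                           16.3, 15.7, 16.1, 16.0, 28.9, 16.2, 16.1])

avg_fps = 1000.0 / frame_times_ms.mean()
p99 = np.percentile(frame_times_ms, 99)   # 99% of frames finished faster than this

print(f"average frame rate: {avg_fps:.1f} fps")
print(f"99th percentile frame time: {p99:.1f} ms")

# A driver that trims the occasional ~30 ms spike barely moves the average,
# but it pulls the 99th percentile down noticeably -- which is why the
# metric is used as an indicator of smoothness.
```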

This is great news for Radeon owners, and not just RX 580 customers. AMD’s Scott Wasson told me that, if anything, the gaps may widen with the Radeon Vega lineup, but that AMD wanted to focus on where its graphics card lineup struggled more with this class of game. PLAYERUNKNOWN’S BATTLEGROUNDS is known to be a highly unoptimized game, and seeing work from AMD at both the driver and the developer relations level is fantastic.

However, there are a couple of other things to keep in mind. These increases in performance are in comparison to the 17.12.1 release, the Adrenalin launch driver from December of last year. There have been several drivers released between then and now, so we have likely seen SOME of this increase along the way.

Also, while this initiative and project are on the right track for AMD, the company isn’t committing to any future releases along these lines. To me, giving this release and direction a marketing name and calling it a “project” indicates that there is or will be continued work on this front: key optimizations and developer work for very popular titles even after the initial launch window. All I was told today was that “there may be” more coming down the pipeline, but they had nothing to announce at this time. Hmph.

Also note that NVIDIA hasn’t been sitting idle during this time. The last email I received from NVIDIA’s driver team indicates that driver 391.01 offers “performance improvements in PlayerUnknown’s Battlegrounds (PUBG)” of up to 7%, and the website lists a specific table with performance uplifts:

nvtable.png

While I am very happy to see AMD keeping its continued software promise for further development and optimization for current customers going strong, it simply HAS TO if it wants to keep pace with the efforts of the competition.

All that being said – if you have a Radeon graphics card and plan on joining us to parachute in for some PUBG matches tonight, go grab the new driver immediately!

Source: Radeon

NVIDIA GeForce 391.05 Hotfix Driver Available

Subject: Graphics Cards | March 4, 2018 - 02:02 PM |
Tagged: nvidia, hotfix, graphics drivers

NVIDIA has published a hotfix driver, 391.05, for a few issues that didn’t make it into the recently released 391.01 WHQL version. Specifically, if you are experiencing any of the following issues, then you can go to the NVIDIA forums and follow the link to their associated CustHelp page:

  • NVIDIA Freestyle stopped working
  • Display corruption on Titan V
  • Support for Microsoft Surface Book notebooks

nvidia-2015-bandaid.png

While improved support for the Titan V and the Microsoft Surface Book is very important for anyone who owns those devices, NVIDIA Freestyle is the interesting one for the masses. Freestyle allows users to hook the post-processing stage of various supported games and inject their own effects. The feature launched in January and is still in beta, but lead users still want it to work, of course. If you were playing around with this feature and it stopped working on 390-based drivers, then check out this hotfix.

For the rest of us? Probably a good idea to stay on the official drivers. Hotfixes have reduced QA, so it’s possible that other bugs were introduced in the process.

Source: NVIDIA

Bitmain could create headaches for NVIDIA, AMD, and Qualcomm

Subject: Graphics Cards | February 28, 2018 - 09:04 PM |
Tagged: bitmain, bitcoin, qualcomm, nvidia, amd

This article originally appeared in MarketWatch.

Research firm Bernstein recently published a report on the profitability of Bitmain Technologies, a secretive Chinese company with a huge impact on the bitcoin and cryptocurrency markets.

With estimated 2017 profits ranging from $3 billion to $4 billion, the size and scope of Beijing-based Bitmain is undeniable, with annual net income higher than some major tech players, including Nvidia and AMD. The privately held company, founded five years ago, has expanded its reach into many bitcoin-based markets, but most of its income stems from the development and sale of dedicated cryptocurrency mining hardware.

There is a concern that the sudden introduction of additional companies in the chip-production landscape could alter how other players operate. This includes the ability for Nvidia, AMD, Qualcomm and others to order chip production from popular semiconductor vendors at the necessary prices to remain competitive in their respective markets.

Bitmain makes most of its income through the development of dedicated chips used to mine bitcoin. These ASICs (application-specific integrated circuits) offer better performance and power efficiency than other products such as graphics chips from Nvidia and AMD. The Bitmain chips are then combined into systems called “miners” that can include as many as 250 chips in a single unit. Those are sold to large mining companies or individuals hoping to turn a profit from the speculative cryptocurrency markets for prices ranging from a few hundred to a few thousand dollars apiece.

Bitcoin mining giant

Bernstein estimates that as much as 70%-80% of the dedicated market for bitcoin mining is being addressed by Bitmain and its ASIC sales.

Bitmain has secondary income sources, including running mining pools (where groups of bitcoin miners share the workload of computing in order to turn a profit sooner) and cloud-based mining services where customers can simply rent mining hardware that exists in a dedicated server location. This enables people to attempt to profit from mining without the expense of buying hardware directly.

bitmain.JPG

A Bitmain Antminer

The chip developer and mining hardware giant has key advantages for revenue growth and stability, despite the volatility of the cryptocurrency market. When Bitmain designs a new ASIC that can address a new currency or algorithm, or run a current coin algorithm faster than was previously possible, it can choose to build its Antminers (the brand for these units) and operate them at its own server farms, squeezing the profitability and advantage the faster chips offer on the bitcoin market before anyone else in the ecosystem has access to them.

As the difficulty of mining increases (which occurs as higher-performance mining options are released, lowering the profitability of older hardware), Bitmain can then start selling the new chips and associated Antminers to customers, moving revenue from mining directly to sales of mining hardware.
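A rough back-of-the-envelope model shows why rising difficulty pushes Bitmain from mining toward selling. All of the hardware numbers below are hypothetical, transaction fees are ignored, and the only fixed inputs are the 12.5 BTC block subsidy and the roughly 144 blocks mined per day:

```python
# Sketch of the mining economics described above, with made-up numbers.
# A miner's daily revenue is its share of the total network hash rate times
# the coins minted per day (~144 blocks/day at 12.5 BTC each in 2018).
def daily_profit_usd(miner_ths, network_ehs, btc_price, power_w, elec_usd_kwh):
    share = miner_ths / (network_ehs * 1_000_000)      # TH/s vs. EH/s
    revenue = share * 144 * 12.5 * btc_price           # USD per day, fees ignored
    power_cost = power_w / 1000 * 24 * elec_usd_kwh    # USD per day
    return revenue - power_cost

# Hypothetical ASIC: 14 TH/s at 1,400 W, electricity at $0.06/kWh, BTC at $8,000.
# As the network hash rate (a proxy for difficulty) rises, profit shrinks.
for network_ehs in (15, 25, 40):
    profit = daily_profit_usd(14, network_ehs, 8000, 1400, 0.06)
    print(f"network at {network_ehs} EH/s -> ${profit:.2f} per day")
```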

This pattern can be repeated for as long as chip development continues, giving Bitmain a tremendous amount of flexibility to balance revenue from different streams.

Imagine a situation where one of the major graphics chip vendors exclusively used its latest graphics chips for its own services like cloud-compute, crypto-mining and server-based rendering and how much more valuable those resources would be — that is the power that Bitmain holds over the bitcoin market.

Competing for foundry business

Clearly Bitmain is big business, and its impact goes well beyond just the bitcoin space. Because its dominance for miners depends on new hardware designs and chip production, where performance and power efficiency are critical to building profitable hardware, it competes for the same foundry business as other fabless semiconductor giants. That includes Apple, Nvidia, Qualcomm, AMD and others.

Companies that build ASICs as part of their business model, including Samsung, TSMC, GlobalFoundries and even Intel to a small degree, look for customers willing to bid the most for the limited availability of production inventory. Bitmain is not restricted to a customer base that is cost-sensitive — instead, its customers are profit-sensitive. As long as the crypto market remains profitable, Bitmain can absorb the added cost of chip production.

Advantages over Nvidia, AMD and Qualcomm

Nvidia, AMD and Qualcomm are not as flexible. Despite the fact that Nvidia can charge thousands of dollars for some of its most powerful graphics chips when targeting the enterprise and machine-learning markets, the wider gaming market is more sensitive to price changes. You can see that in the unrest that has existed in the gaming space as the price of graphics cards has risen due to inventory going to miners rather than gamers. Neither AMD nor Nvidia will get away with selling graphics cards to partners at higher prices and, as a result, there is a potential for negative market growth in PC gaming.

If Bitmain uses the same foundry as others, and is willing to pay more for it to build their chips at a higher priority than other fabless semiconductor companies, then it could directly affect the availability and pricing for graphics chips, mobile phone processors and anything else built at those facilities. As a result, not only does the cryptocurrency market have an effect on the current graphics chip market for gamers by causing shortages, but it could also impact future chip availability if Bitmain (and its competitors) are willing to spend more for the advanced process technologies coming in 2018 and beyond.

Still, nothing is certain in the world of bitcoin and cryptocurrency. The fickle and volatile market means the profitability of Bitmain’s Antminers could be reduced, lessening the drive to pay more for chips and production. There is clearly an impact from sudden bitcoin value drops (from $20,000 to $6,000, as we saw this month) on mining hardware sales, both graphics chip-based and ASIC-based, but measuring and predicting that is a difficult venture.

Source: MarketWatch
Manufacturer: AMD

Overview

It's clear by now that AMD's latest CPU releases, the Ryzen 3 2200G and the Ryzen 5 2400G, are compelling products. We've already taken a look at them in our initial review, as well as investigated how memory speed affects the graphics performance of the integrated GPU, but it seemed there was something missing.

Recently, it's been painfully clear that GPUs excel at more than just graphics rendering. With the rise of cryptocurrency mining, OpenCL and CUDA performance are as important as ever.

Cryptocurrency mining certainly isn't the only application where having a powerful GPU can help system performance. We set out to see how much of an advantage the Radeon Vega 11 graphics in the Ryzen 5 2400G provided over the significantly less powerful UHD 630 graphics in the Intel i5-8400.
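For readers who haven't written GPU compute code, "OpenCL performance" here just means how quickly the GPU can chew through kernels like the toy vector-add below. This is a generic pyopencl sketch for illustration, not one of the benchmarks in our test suite:

```python
# Minimal sketch of a GPGPU workload in OpenCL via pyopencl -- the same kind
# of compute path (though not the same tests) that SiSoft Sandra exercises.
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()      # picks an available OpenCL device (e.g. the iGPU)
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

# One work-item per element; the GPU's shader array does the arithmetic.
program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print(np.allclose(result, a + b))
```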

DSC04637.JPG

Test System Setup

CPU: AMD Ryzen 5 2400G / Intel Core i5-8400
Motherboard: Gigabyte AB350N-Gaming WiFi / ASUS STRIX Z370-E Gaming
Memory: 2 x 8GB G.SKILL FlareX DDR4-3200 (all memory running at 3200 MHz)
Storage: Corsair Neutron XTi 480 SSD
Sound Card: On-board
Graphics Card: AMD Radeon Vega 11 Graphics / Intel UHD 630 Graphics
Graphics Drivers: AMD 17.40.3701 / Intel 23.20.16.4901
Power Supply: Corsair RM1000x
Operating System: Windows 10 Pro x64 RS3

 

GPGPU Compute

Before we take a look at some real-world examples of where a powerful GPU can be utilized, let's look at the relative power of the Vega 11 graphics on the Ryzen 5 2400G compared to the UHD 630 graphics on the Intel i5-8400.

sisoft-screen.png

SiSoft Sandra is a suite of benchmarks covering a wide array of system hardware and functionality, including an extensive range of GPGPU tests, which we are looking at today. 

sandra1.png

Comparing the raw shader performance of the Ryzen 5 2400G and the Intel i5-8400 provides a clear snapshot of what we are dealing with. In every precision category, the Vega 11 graphics in the AMD part are significantly more powerful than the Intel UHD 630 graphics. This all combines to provide a 175% increase in aggregate shader performance over Intel for the AMD part. 

Now that we've taken a look at the theoretical power of these GPUs, let's see how they perform in real-world applications.

Continue reading our look at the GPU compute performance of the Ryzen 5 2400G!

Manufacturer: AMD

Memory Matters

Memory speed is not a factor that the average gamer thinks about when building their PC. For the most part, memory performance hasn't had much of an effect on modern processors running high-speed memory such as DDR3 and DDR4.

With the launch of AMD's Ryzen processors last year, a platform emerged that was more sensitive to memory speed. By running Ryzen processors with higher-frequency, lower-latency memory, users can see significant performance improvements, especially in 1080p gaming scenarios.

However, the Ryzen processors are not the only ones to exhibit this behavior.

Gaming on integrated GPUs is a perfect example of a memory-starved situation. Take, for instance, the new AMD Ryzen 5 2400G and its Vega-based GPU cores. In a full Vega 56 or 64, these Vega cores are paired with blazingly fast HBM2 memory. However, due to constraints such as die space and cost, this processor does not integrate HBM.

DSC04643.JPG

Instead, the CPU portion and the graphics portion of the APU must both depend on the same pool of DDR4 system memory. DDR4 is significantly slower than the memory traditionally found on graphics cards, such as GDDR5 or HBM. As a result, APU performance is usually memory limited to some extent.

In the past, we've done memory speed testing with AMD's older APUs; however, with the launch of the new Ryzen- and Vega-based R3 2200G and R5 2400G, we decided to take another look at this topic.

For our testing, we are running the Ryzen 5 2400G at three different memory speeds: 2400 MHz, 2933 MHz, and 3200 MHz. While the maximum supported JEDEC memory speed for the R5 2400G is 2933 MHz, the memory AMD provided for our processor review overclocks to 3200 MHz just fine.
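For context, here is the peak theoretical bandwidth each of those speeds provides, assuming a dual-channel configuration (our assumption for this setup) at 8 bytes per transfer per channel:

```python
# Peak theoretical bandwidth for a dual-channel DDR4 setup:
# data rate (MT/s) x 8 bytes per transfer per channel x 2 channels.
# Dual-channel operation is our assumption for this test configuration.
def ddr4_bandwidth_gbs(data_rate_mts: int, channels: int = 2) -> float:
    return data_rate_mts * 8 * channels / 1000   # GB/s

for speed in (2400, 2933, 3200):
    print(f"DDR4-{speed}: {ddr4_bandwidth_gbs(speed):.1f} GB/s")

# DDR4-2400: 38.4 GB/s
# DDR4-2933: 46.9 GB/s
# DDR4-3200: 51.2 GB/s
# Even a modest discrete card's GDDR5 offers well over 100 GB/s, which is
# why the shared DDR4 pool tends to be the bottleneck for an APU's GPU.
```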

Continue reading our look at memory speed scaling with the Ryzen 5 2400G!

NVIDIA Job Posting for Metal and OpenGL Engineer

Subject: Graphics Cards | February 18, 2018 - 02:54 PM |
Tagged: opengl, nvidia, metal, macos, apple

Just two days ago, NVIDIA published a job posting for a software engineer to “implement and extend 3D graphics and Metal”. Given that they specify the Metal API, and that they want applicants who are “Experienced with OSX and/or Linux operating systems”, it seems clear that this job would involve macOS and/or iOS.

First, if this appeals to any of our readers, the job posting is here.

Apple-logo.png

Second, and this is where it gets potentially news-worthy, is that NVIDIA hasn’t really done a whole lot on Apple platforms for a while. The most recent NVIDIA GPU to see macOS is the GeForce GTX 680. It’s entirely possible that NVIDIA needs someone to fill in and maintain those old components. If that’s the case? Business as usual. Nothing to see here.

The other possibility is that NVIDIA might be expecting a design win with Apple. What? Who knows. It could be something as simple as Apple’s external GPU architecture allowing the user to select their own add-in board. Alternatively, Apple could have selected an NVIDIA GPU for one or more product lines, which they have not done since 2013 (as far as I can tell).

Apple typically makes big announcements at WWDC, which is expected in early June, or around the back-to-school season in September. I’m guessing we’ll know by then at the latest if something is in the works.

Source: NVIDIA