Micron Planning To Launch GDDR6 Graphics Memory In 2017

Subject: Graphics Cards | February 4, 2017 - 08:29 PM |
Tagged: micron, graphics memory, gddr6

This year is shaping up to be a good one for memory, with the promise of 3D XPoint (Intel/Micron), HBM2 (SK Hynix and Samsung), and now GDDR6 graphics memory from Micron. While GDDR6 was originally slated to launch next year, Micron recently announced its intention to start producing the memory chips in the latter half of 2017, which would put it much earlier than previously expected.

Micron Logo.png

Computer World reports that Micron cites the rise of e-sports and gaming, which is driving the computer market toward three-year upgrade cycles rather than five-year cycles, as the primary reason for shifting GDDR6 production into high gear and moving up the launch window. (I am not sure how accurate that is, as PCs actually seem to stay relevant longer between upgrades these days, but I digress.) The company expects the e-sports market to grow to 500 million fans by 2020, and it is a growing market that Micron wants to stay relevant in.

If you missed our previous coverage, GDDR6 is the successor to GDDR5 and offers twice the bandwidth at 16 Gb/s (gigabits per second) per pin. It is also faster than GDDR5X (12 Gb/s) and uses 20% less power, which the gaming laptop market will appreciate. HBM2 still holds the bandwidth crown, though, as it offers 256 GB/s per stack and up to 1 TB/s with four stacks connected to a GPU on package.
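
To put those per-pin and per-stack numbers in context, total bandwidth is just bus width times per-pin data rate. Here is a minimal sketch of the arithmetic in C; the 256-bit GDDR bus width is an illustrative assumption on my part (not a figure from Micron), while HBM2's 1024-bit-per-stack interface at 2 Gb/s per pin is the standard configuration:

```c
/* Rough memory-bandwidth arithmetic for the figures quoted above.
 * The 256-bit GDDR bus width is a hypothetical card configuration. */
#include <stdio.h>

/* bandwidth in GB/s = (bus width in bits * per-pin rate in Gb/s) / 8 */
static double bandwidth_gbs(int bus_width_bits, double gbps_per_pin)
{
    return bus_width_bits * gbps_per_pin / 8.0;
}

int main(void)
{
    printf("GDDR6,  256-bit @ 16 Gb/s: %4.0f GB/s\n", bandwidth_gbs(256, 16.0));
    printf("GDDR5X, 256-bit @ 12 Gb/s: %4.0f GB/s\n", bandwidth_gbs(256, 12.0));
    printf("HBM2,  1024-bit @  2 Gb/s: %4.0f GB/s per stack\n", bandwidth_gbs(1024, 2.0));
    return 0;
}
```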

As such, High Bandwidth Memory (HBM2 and then HBM3) will power the high-end gaming and professional graphics cards, GDDR6 will become the memory used for mid-range cards, and GDDR5X (which is actually capable of going faster, but will likely not be pushed much past 12 Gbps if GDDR6 really does arrive this soon) will replace GDDR5 on most if not all of the lower-end products.

I am not sure whether Micron's reasoning of e-sports, faster upgrade cycles, and VR as the motivating factor(s) for ramping up production early is sound, but I will certainly take the faster memory coming out sooner rather than later! Depending on exactly when in 2017 the chips start rolling off the fabs, we could see graphics cards using the new memory technology as soon as early 2018 (just in time for CES announcements? I can see the PR flooding in already!).

Will Samsung change course too and aim for a 2017 release of its own GDDR6 memory?

Are you ready for GDDR6?

NVIDIA Releases Vulkan Developer 376.80 Beta Drivers

Subject: Graphics Cards | February 3, 2017 - 10:58 PM |
Tagged: nvidia, graphics drivers, vulkan

On February 1st, NVIDIA released a new developer beta driver that fixes a couple of issues with their Vulkan API implementation. Despite what some sites have been reporting, you should not download it to play games that use the Vulkan API, like DOOM. In short, it is designed for developers, not end users. The goal is to provide correct results when software interacts with the driver, not the best gaming performance.

khronos-2016-vulkanlogo2.png

In a little more detail, 376.80 implements the Vulkan 1.0.39.1 SDK. This update addresses two issues with accessing devices and extensions, under certain conditions, when using the 1.0.39.0 SDK. 1.0.39.0 was released on January 23rd, so it will not even be a part of current video games. Moreover, like most graphics drivers aimed at software developers, it is based on the older GeForce 376 branch, so it will not have NVIDIA's most recent fixes and optimizations. NVIDIA does this so they can add or change the features that Vulkan developers require without needing to roll in patches every time they make a "Game Ready" optimization. There is no reason to use this driver unless you are developing Vulkan applications and want to try out the new extensions. It will eventually make it to end users... when it's time.
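
If you do install a developer beta like this, the first sanity check is usually to query what the driver actually exposes. Below is a minimal C sketch that enumerates instance-level extensions, assuming the Vulkan SDK headers and loader are installed (link with -lvulkan); device-level extensions are queried similarly with vkEnumerateDeviceExtensionProperties:

```c
/* List the instance extensions the installed Vulkan driver advertises,
 * which is how you would confirm a developer beta exposes the new
 * extensions you want to try. */
#include <stdio.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

int main(void)
{
    uint32_t count = 0;

    /* First call gets the count, second call fills the array. */
    vkEnumerateInstanceExtensionProperties(NULL, &count, NULL);

    VkExtensionProperties *props = malloc(count * sizeof *props);
    if (!props)
        return 1;
    vkEnumerateInstanceExtensionProperties(NULL, &count, props);

    for (uint32_t i = 0; i < count; ++i)
        printf("%s (spec version %u)\n",
               props[i].extensionName, (unsigned)props[i].specVersion);

    free(props);
    return 0;
}
```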

If you wish to develop software using Vulkan's bleeding-edge features, then check out NVIDIA's developer portal to pick up the latest drivers. Basically everyone else should use 378.49 or its 378.57 hotfix.

Source: NVIDIA

Radeon Software Crimson ReLive 17.1.2 Drivers Released

Subject: Graphics Cards | February 2, 2017 - 12:02 PM |
Tagged: graphics drivers, amd

A few days ago, AMD released their second graphics driver of January 2017: Radeon Software Crimson ReLive 17.1.2. The main goal of these drivers is to support the early access of Conan Exiles as well as tomorrow's closed beta for Tom Clancy's Ghost Recon Wildlands. The optimizations that AMD has been working on for either game ahead of release are rolled into this version.

amd-2016-crimson-relive-logo.png

Beyond game-specific optimizations, a handful of bugs are also fixed, ranging from crashes to rendering artifacts. There was also an issue with configuring WattMan on a system with multiple monitors, where the memory clock would drop or bounce around. The driver also has a bunch of known issues, including a couple of hangs and crashes under certain situations.

Radeon Software Crimson ReLive Edition 17.1.2 is available at AMD’s website.

Source: AMD

NVIDIA Releases GeForce 378.57 Hotfix Drivers

Subject: Graphics Cards | February 2, 2017 - 12:01 PM |
Tagged: nvidia, graphics drivers

If you were having issues with Minecraft on NVIDIA's recent 378.49 drivers, then you probably want to try out their latest hotfix. This version, numbered 378.57, will not be pushed out through GeForce Experience, so you will need to grab it from NVIDIA's customer support page.

nvidia-2015-bandaid.png

Beyond Minecraft, this also fixes an issue with "debug mode". For some Pascal-based graphics cards, the option in NVIDIA Control Panel > Help > Debug Mode might be on by default. This option reduces factory-overclocked GPUs down to NVIDIA's reference speeds, which is useful for eliminating stability issues in testing, but pointlessly slow if you are already stable. I mean, you bought the factory overclock, right? I'm guessing someone at NVIDIA used it to test 378.49 during its development, fixed an issue, and accidentally committed the config file with the rest of the fix. Either way, someone caught it, and it's now fixed, even though you should be able to just untick the option if you have a factory-overclocked GPU.

Source: NVIDIA

Win our RX 460 Budget Gaming System!!

Subject: Graphics Cards | January 31, 2017 - 04:18 PM |
Tagged: rx 460, radeon, giveaway, contest, buildapc, amd

As part of our partnership with AMD to take a look at the Radeon RX 460 as a budget gaming graphics solution, we are giving away the computer we built for our testing. If you missed our previous stories, shame on you. Check them out here:

Check out the embedded block below to see how you can win our system. It is a global giveaway, so feel free to enter no matter where you live! Thanks again to AMD for providing the hardware for this build!

Radeon RX 460 Budget System Giveaway (sponsored by AMD)

Source: AMD

DirectX Intermediate Language Announced... via GitHub

Subject: Graphics Cards | January 28, 2017 - 02:19 AM |
Tagged: microsoft, DirectX, llvm, dxil, spir-v, vulkan

Over the holidays, Microsoft published the DirectX Shader Compiler on GitHub. The interesting part is that it compiles HLSL into DirectX Intermediate Language (DXIL) bytecode, which can be ingested by GPU drivers and executed on graphics devices. The reason this is interesting is that DXIL is based on LLVM, which might start to sound familiar if you have been following along with The Khronos Group and their announcements regarding Vulkan, OpenCL, and SPIR-V.

As it turns out, they were on to something, and Microsoft is working on a DirectX analogue of it.

microsoft-2015-directx12-logo.jpg

The main advantage of LLVM-based bytecode is that you can eventually support multiple languages (and the libraries of code developed in them). When SPIR-V was announced with Vulkan, the first thing that came to my mind was compiling to it from HLSL, which would be useful for existing engines, as they are typically written in HLSL and transpiled to the target platform's language when used outside of DirectX (like GLSL for OpenGL). So, in Microsoft's case, it makes sense that they start there (since they own the thing), but I doubt that is the end goal. The most seductive outcome for game engine developers would be single-source C++, but there are many steps between here and there.

Another advantage, albeit to a lesser extent, is that you might be able to benefit from performance optimizations, both on the LLVM / language side as well as on the driver’s side.

According to their readme, the minimum supported target will be HLSL Shader Model 6. This is the most recent shading model, and it introduces some interesting instructions, typically for GPGPU applications, that allow multiple GPU threads to interact, such as balloting. Ironically, while DirectCompute and C++ AMP never seemed to catch on, this nudges DirectX 12 toward being a fairly competent GPU compute API.
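
To illustrate what a ballot instruction computes: every active lane in a wave contributes one predicate bit to a bitmask that all lanes can then read. Here is a toy CPU sketch of those semantics in C; the 32-lane wave size and the predicate are illustrative assumptions, and real Shader Model 6 code would use the HLSL wave intrinsics instead:

```c
/* Toy illustration of ballot semantics: each "lane" evaluates a
 * predicate, and the result packs one bit per lane into a mask.
 * A GPU does this in one instruction across the wave; here we
 * simply loop over 32 simulated lanes. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define WAVE_SIZE 32

static uint32_t ballot(const int values[WAVE_SIZE])
{
    uint32_t mask = 0;
    for (int lane = 0; lane < WAVE_SIZE; ++lane)
        if (values[lane] > 0)               /* illustrative predicate */
            mask |= UINT32_C(1) << lane;
    return mask;
}

int main(void)
{
    int values[WAVE_SIZE] = { 5, -3, 7, 0, 12, -1, 9, 4 }; /* rest are zero */
    uint32_t mask = ballot(values);
    printf("ballot mask: 0x%08" PRIx32 "\n", mask); /* lanes 0,2,4,6,7 set */
    return 0;
}
```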

DXIL support is limited to Windows 10 Build 15007 and later, so you will need to either switch one (or more) workstations to the Insider program or wait until it launches with the Creators Update (unless something surprising holds it back).

NVIDIA Releases GeForce 378.49 Drivers

Subject: Graphics Cards | January 27, 2017 - 02:38 AM |
Tagged: nvidia, graphics drivers

Update: There are multiple issues being raised in our comments, including a Steam post by Sam Lantinga (Valve) about this driver breaking In-Home Streaming. Other complaints include certain applications crashing and hardware acceleration issues.

Original Post Below

Now that the holidays are over, we're ready for the late-winter rush of "AAA" video games. Three of them, Resident Evil VII, the early access of Conan Exiles, and the closed beta of For Honor, are targeted by NVIDIA's GeForce 378.49 Game Ready drivers. Unless we get a non-Game Ready driver in the interim, I am guessing that this will cover us until mid-February, just before the full release of For Honor, alongside Sniper Elite 4 and followed by Halo Wars 2 the next week.

nvidia-geforce.png

Beyond game-specific updates, the 378 branch of drivers includes a bunch of SLI profiles, including one for Battlefield 1. It also paves the way for GTX 1050- and GTX 1050 Ti-based notebooks; this is the launch driver for whenever OEMs begin to ship the laptops they announced at CES.

This release also contains a bunch of bug fixes (PDF), including a reboot bug with Wargame: Red Dragon and TDR (driver time-out) issues with the Windows 10 Anniversary Update. I haven't experienced any of these myself, but it's good to see them fixed regardless.

You can pick up the new drivers from their website if, you know, GeForce Experience hasn’t already notified you.

Source: NVIDIA

The Gigabyte GTX 1060 G6, a decent card with a bit of work

Subject: Graphics Cards | January 25, 2017 - 08:37 PM |
Tagged: windforce, factory overclocked, GTX 1060 G1 GAMING 6G, GeForce GTX 1060, gigabyte

In their testing, [H]ard|OCP proved that the Windforce cooler is not the limiting factor when overclocking Gigabyte's GTX 1060 G1 Gaming 6G; even at their top overclock of 2.1 GHz on the GPU and 9.4 GHz on the memory, the temperature never reached 60C. They did have some obstacles reaching those speeds: the card's onboard Gaming mode offered an anemic boost, and to start manually overclocking the card you will need to install the XTREME ENGINE VGA Utility. Once you have that, you can increase the voltage and clocks to find the limits of your particular card, which should offer a noticeable improvement over its performance straight out of the box.

1484519953uTUJTeH5LD_1_1.png

"We’ve got the brand new GIGABYTE GeForce GTX 1060 G1 GAMING 6G video card to put through the paces and find out how well it performs in games and overclocks. We will compare its highest overclock with one of the best overclocks we’ve achieved on AMD Radeon RX 480 to put it to the test. How will it stand up? Let’s find out."

Source: [H]ard|OCP

Sapphire Releases AMD Radeon RX460 with 1024 Shaders

Subject: Graphics Cards | January 19, 2017 - 01:43 AM |
Tagged: video, unlock, shaders, shader cores, sapphire, radeon, Polaris, graphics, gpu, gaming, card, bios, amd, 1024

As reported by WCCFtech, AMD partner Sapphire has listed a new 1024-stream-processor version of the RX 460 on their Chinese-language site. This product reveal comes, of course, after it became known that RX 460 graphics cards had the potential to have their stream processor count unlocked from 896 to 1024 via a BIOS update.

Sapphire_201712151752.jpg

Sapphire RX460 1024SP 4G D5 Ultra Platinum OC (image credit: Sapphire)

The Sapphire RX 460 1024SP edition offers a full Polaris 11 core operating at 1250 MHz, and it otherwise matches the specifications of a stock RX 460 graphics card. Whether this product will be available outside of China is unknown, as is its potential pricing should it reach the USA. A 4GB Radeon RX 460 retails for $99, while the current step-up option is the RX 470, which doubles this card's 1024 shaders to 2048 at a price premium of about 70% ($169).

Radeon_Chart.PNG

AMD Polaris GCN 4.0 GPU lineup (Credit WCCFtech)

As you may note from the chart above, there is also an RX 470D option that slots between these cards with 1792 shaders, though it too is China-only.

Source: WCCFtech

Gigabyte Shows Off Half Height GTX 1050 and GTX 1050 Ti Graphics Cards

Subject: Graphics Cards | January 18, 2017 - 03:31 AM |
Tagged: SFF, pascal, low profile, GTX 1050 Ti, gtx 1050, gigabyte

Without much fanfare, Gigabyte recently launched two new low-profile, half-height graphics cards packing factory-overclocked GTX 1050 and GTX 1050 Ti GPUs. The new cards measure 6.6" x 2.7" x 1.5" (167 mm long) and are cooled by a small, shrouded single-fan cooler.
 
Gigabyte GTX 1050 OC Low Profile 2G.png
 
Around back, both the Gigabyte GTX 1050 OC Low Profile 2G and the GTX 1050 Ti OC Low Profile 4G offer four display outputs: two HDMI 2.0b, one DisplayPort 1.4, and one dual-link DVI-D. It appears that Gigabyte is using the same cooler for both cards; there is not much information on it, but it uses an aluminum heatsink and what looks like a ~50 mm fan. Note that while the cards are half-height, they use a dual-slot design, which may limit the cases they can be used in.
 
The GTX 1050 OC Low Profile 2G features 640 Pascal-based CUDA cores clocked at 1366 MHz base and 1468 MHz boost out of the box (1392 MHz base and 1506 MHz boost in OC Mode using Gigabyte’s software) and 2GB of GDDR5 memory at 7008 MHz (7GT/s). For comparison, the GTX 1050 reference clock speeds are 1354 MHz base and 1455 MHz boost.
 
Meanwhile, the GTX 1050 Ti OC Low Profile 4G has 768 cores clocked at 1303 MHz base and 1417 MHz boost by default and 1328 MHz base and 1442 MHz boost in OC Mode. The GPU is paired with 4GB of GDDR5 memory at 7GT/s. NVIDIA’s reference GPU clocks are 1290 MHz base and 1392 MHz boost.
 
The pint-sized graphics cards would certainly allow for gaming on your SFF home theater PC or other small desktop, as well as being an easy upgrade to make a tiny OEM PC gaming capable (think of those thin towers HP, Lenovo, and Dell like to use).
 
Of course, Gigabyte is not yet talking pricing, and availability has only been narrowed down to a general Q1 2017 time frame. I would expect the cards to hit retailers within a month or so, priced somewhere around $135 for the half-height GTX 1050 OC LP 2G and approximately $155 for the faster GTX 1050 Ti variant. That is to say, the low-profile cards should be available at a slight premium over the company's larger GTX 1050 and GTX 1050 Ti graphics cards.

Source: Gigabyte