GDDR5X Memory Standard Gets Official with JEDEC

Subject: Graphics Cards, Memory | January 22, 2016 - 04:08 PM |
Tagged: Polaris, pascal, nvidia, jedec, gddr5x, GDDR5, amd

Though information about the technology has been making the rounds over the last several weeks, GDDR5X is finally official with an announcement from JEDEC this morning. The JEDEC Solid State Technology Association is, as Wikipedia tells us, an "independent semiconductor engineering trade organization and standardization body" that is responsible for creating memory standards. Getting the official nod from the organization means we are likely to see implementations of GDDR5X in the near future.

The press release is short and sweet. Take a look.

ARLINGTON, Va., USA – JANUARY 21, 2016 –JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of JESD232 Graphics Double Data Rate (GDDR5X) SGRAM.  Available for free download from the JEDEC website, the new memory standard is designed to satisfy the increasing need for more memory bandwidth in graphics, gaming, compute, and networking applications.

Derived from the widely adopted GDDR5 SGRAM JEDEC standard, GDDR5X specifies key elements related to the design and operability of memory chips for applications requiring very high memory bandwidth.  With the intent to address the needs of high-performance applications demanding ever higher data rates, GDDR5X  is targeting data rates of 10 to 14 Gb/s, a 2X increase over GDDR5.  In order to allow a smooth transition from GDDR5, GDDR5X utilizes the same, proven pseudo open drain (POD) signaling as GDDR5.

“GDDR5X represents a significant leap forward for high end GPU design,” said Mian Quddus, JEDEC Board of Directors Chairman.  “Its performance improvements over the prior standard will help enable the next generation of graphics and other high-performance applications.”

JEDEC claims that, while using the same signaling type as GDDR5, GDDR5X is able to double the per-pin data rate to 10-14 Gb/s. In fact, based on leaked slides about GDDR5X from October, JEDEC actually calls GDDR5X an extension to GDDR5, rather than a new standard. How does GDDR5X reach these new speeds? By doubling the prefetch from 32 bytes to 64 bytes. This will require a redesign of the memory controller for any processor that wants to integrate it.
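For a rough sense of what the per-pin numbers mean at the card level, here is a minimal back-of-the-envelope sketch (my own arithmetic, not from the JEDEC release). The 256-bit bus and the 7 Gb/s GDDR5 baseline are assumptions for illustration, and, as noted below, usable bandwidth also depends on access granularity.

```python
# Peak theoretical bandwidth for a hypothetical 256-bit graphics card,
# comparing GDDR5 and GDDR5X per-pin data rates. Illustrative only.
def peak_bandwidth_gb_s(per_pin_gbps, bus_width_bits):
    """Peak bandwidth in GB/s = per-pin rate (Gb/s) * bus width (bits) / 8."""
    return per_pin_gbps * bus_width_bits / 8

BUS_WIDTH = 256  # bits, assumed for this example

for label, rate in [("GDDR5 @ 7 Gb/s", 7), ("GDDR5X @ 10 Gb/s", 10), ("GDDR5X @ 14 Gb/s", 14)]:
    print(f"{label}: {peak_bandwidth_gb_s(rate, BUS_WIDTH):.0f} GB/s")
# GDDR5  @ 7 Gb/s:  224 GB/s
# GDDR5X @ 10 Gb/s: 320 GB/s
# GDDR5X @ 14 Gb/s: 448 GB/s
```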

gddr5x.jpg

Image source: VR-Zone.com

As for usable bandwidth, though no figure is quoted directly, the increase would likely be much smaller than the per-pin numbers in the press release suggest. Because the memory bus width remains unchanged, and GDDR5X simply grabs twice the chunk size per prefetch, we should expect an incremental change. Power efficiency isn't mentioned either, and that was one of the driving factors in the development of HBM.

07-bwperwatt.jpg

Performance efficiency graph from AMD's HBM presentation

I am excited about any improvement in memory technology that will increase GPU performance, but I can tell you that from my conversations with both AMD and NVIDIA, no one appears to be jumping at the chance to integrate GDDR5X into upcoming graphics cards. That doesn't mean it won't happen with some version of Polaris or Pascal, but it seems that there may be concerns other than bandwidth that keep it from taking hold. 

Source: JEDEC

Phoronix Tests Almost a Decade of GPUs

Subject: Graphics Cards | January 20, 2016 - 08:26 PM |
Tagged: nvidia, linux, tesla, fermi, kepler, maxwell

It's nice to see long-term roundups every once in a while. They do not really provide useful information for someone looking to make a purchase, but they show how our industry is changing (or not). In this case, Phoronix tested twenty-seven NVIDIA GeForce cards across four architectures: Tesla, Fermi, Kepler, and Maxwell. In other words, from the GeForce 8 series all the way up to the GTX 980 Ti.

phoronix-2016-many-nvidia-roundup.jpg

Image Credit: Phoronix

Nine years of advancements in ASIC design, with a doubling period of 18 months, should yield a 64-fold improvement. Transistor count falls short of that, showing roughly a 12-fold increase between the Titan X and the largest first-wave Tesla chip, although that metric is largely out of the hands of a fabless semiconductor designer. The main reason I include this figure is to show the actual Moore's Law trend over this time span, but it also highlights the slowdown in process technology.

Performance per watt does depend on NVIDIA, though, and the ratio between the GTX 980 Ti and the 8500 GT is about 72:1. While this is slightly better than the target 64:1 ratio, these parts come from very different positions in their respective product stacks. Swap the 8500 GT for the following year's 9800 GTX, which makes it a comparison between top-of-the-line GPUs of their respective eras, and you see only a 6.2x improvement in performance per watt for the GTX 980 Ti. On the other hand, that part was outstanding for its era.
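For reference, the 64x expectation is just compounding under the assumed 18-month doubling period; a quick sketch of that arithmetic (my own, not Phoronix's):

```python
# Expected improvement after a given span, assuming performance doubles every 18 months.
def expected_factor(years, doubling_period_years=1.5):
    return 2 ** (years / doubling_period_years)

print(expected_factor(9))   # 64.0 -> the 64x target for nine years
print(72 / 64)              # ~1.13 -> the 8500 GT vs. 980 Ti perf-per-watt ratio slightly beats it
```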

I should note that each of these tests takes place on Linux. That might not perfectly reflect the landscape on Windows, but again, it's interesting in its own right.

Source: Phoronix

TSMC Allegedly Wants 5nm by 2020

Subject: Graphics Cards, Processors | January 20, 2016 - 04:38 AM |
Tagged: TSMC

Digitimes is reporting on statements that were allegedly made by TSMC co-CEO, Mark Liu. We are currently seeing 16nm parts come out of the foundry, and that node is expected to be used in the next generation of GPUs, replacing the long-running 28nm node that launched with the GeForce GTX 680. (It's still unannounced whether AMD and NVIDIA will use 14nm FinFET from Samsung or GlobalFoundries, or 16nm FinFET from TSMC.)

Update (Jan 20th, @4pm EST): A couple of minor corrections. The Radeon HD 7970 launched on 28nm first, by a couple of months; I just remember NVIDIA getting swamped in delays because it was a new node, which is probably why I thought of the GTX 680. Also, AMD announced during CES that they will use GlobalFoundries to fab their upcoming GPUs, which I apparently missed. We suspect that NVIDIA will use TSMC, and have assumed that for a while, but it hasn't been officially announced yet (if ever).

tsmc.jpg

According to their projections, which (again) are filtered through Digitimes, the foundry expects to have 7nm in the first half of 2018. They also expect to introduce extreme ultraviolet (EUV) lithography with 5nm in 2020. Given that solid silicon has a lattice spacing of ~0.54 nm at room temperature, 7nm features will be only about 13 atoms across, and 5nm features about 9 atoms across.
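The arithmetic behind those figures, using the commonly quoted ~0.543 nm lattice constant for silicon:

```python
# Approximate feature width measured in silicon lattice constants (a ~ 0.543 nm at room temperature).
LATTICE_NM = 0.543

for node_nm in (7, 5):
    print(f"{node_nm} nm ~ {node_nm / LATTICE_NM:.1f} lattice constants across")
# 7 nm ~ 12.9 lattice constants across
# 5 nm ~  9.2 lattice constants across
```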

We continue the march toward the end of silicon lithography.

Even if the statement is accurate, much can happen between now and then. It wouldn't be the first time that I've seen a major foundry believe a node would arrive on schedule, only to have it delayed. I wouldn't hold my breath, but I might cross my fingers if my hands were free.

At the very least, we can assume that TSMC's roadmap is 16nm, 10nm, 7nm, and then 5nm.

Source: Digitimes

Samsung Mass Produces HBM2 Memory

Subject: Graphics Cards, Memory | January 20, 2016 - 04:01 AM |
Tagged: Samsung, HBM2, hbm

Samsung has just announced that they have begun mass production of 4GB HBM2 memory packages. When used on GPUs, four packages can provide 16GB of video RAM with very high performance. They do this with a very wide data bus, which trades off frequency for transferring huge chunks at once. Samsung's offering is rated at 256 GB/s per package, which is twice what the Fury X could do with HBM1.
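A minimal sketch of how those per-package numbers add up for a hypothetical four-stack GPU (my arithmetic, not Samsung's):

```python
# Aggregate capacity and bandwidth for a GPU with four HBM2 packages, per the figures above.
PACKAGES = 4
CAPACITY_GB_PER_PACKAGE = 4        # 4GB stacks now in mass production
BANDWIDTH_GB_S_PER_PACKAGE = 256   # GB/s, Samsung's rated figure

print(f"Capacity:  {PACKAGES * CAPACITY_GB_PER_PACKAGE} GB")         # 16 GB
print(f"Bandwidth: {PACKAGES * BANDWIDTH_GB_S_PER_PACKAGE} GB/s")    # 1024 GB/s, vs. 512 GB/s on Fury X
```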

samsung-2016-4GB-HBM2-DRAM-structure_main.jpg

They also expect to mass produce 8GB HBM2 packages within this calendar year. I'm guessing that this means we'll see 32GB GPUs in the late-2016 or early-2017 time frame, unless "within this year" means very, very soon (versus Q3/Q4). Those will likely be workstation or professional cards, but, in NVIDIA's case, those are usually based on architectures that are also marketed to high-end gaming enthusiasts through some Titan offering. There are a lot of ways this could go, but a 32GB Titan seems like a bit much, so I wouldn't expect this to affect the enthusiast gamer segment. It might mean that professionals looking to upgrade from the Kepler-based Tesla K-series will be waiting a little longer, maybe even until GTC 2017. Alternatively, they might get new cards this year, just with a 16GB maximum until a refresh next year. There's not enough information to know one way or the other, but it's something to think about as more of it starts rolling in.

Samsung's HBM2 packages are compatible with ECC, although I believe that was also true for at least some HBM1 modules from SK Hynix.

Source: Samsung

Report: NVIDIA Preparing GeForce GTX 980MX and 970MX Mobile GPUs

Subject: Graphics Cards | January 19, 2016 - 03:31 PM |
Tagged: rumor, report, nvidia, GTX 980MX, GTX 980M, GTX 970MX, GTX 970M, geforce

NVIDIA is reportedly preparing faster mobile GPUs based on Maxwell, with a GTX 980MX and 970MX on the way.

img-01.jpg

The new GTX 980MX would sit between the GTX 980M and the laptop version of the full GTX 980, with 1664 CUDA cores (compared to 1536 with the 980M), 104 Texture Units (up from the 980M's 96), a 1048 MHz core clock, and up to 8 GB of GDDR5. Memory speed and bandwidth will reportedly be identical to the GTX 980M at 5000 MHz and 160 GB/s respectively, with both GPUs using a 256-bit memory bus.

The GTX 970MX represents a similar upgrade over the existing GTX 970M, with the CUDA core count increased from 1280 to 1408, Texture Units up from 80 to 88, and 8 additional raster devices (56 vs. 48). Both the 970M and 970MX use 192-bit GDDR5 clocked at 5000 MHz, and are available with the same 3 GB or 6 GB of frame buffer.

WCCFtech prepared a chart to demonstrate the differences between NVIDIA's mobile offerings:

| Model | GTX 980 (Laptop) | GTX 980MX | GTX 980M | GTX 970MX | GTX 970M | GTX 965M | GTX 960M |
|---|---|---|---|---|---|---|---|
| Architecture | Maxwell | Maxwell | Maxwell | Maxwell | Maxwell | Maxwell | Maxwell |
| GPU | GM204 | GM204 | GM204 | GM204 | GM204 | GM204 | GM107 |
| CUDA Cores | 2048 | 1664 | 1536 | 1408 | 1280 | 1024 | 640 |
| Texture Units | 128 | 104 | 96 | 88 | 80 | 64 | 40 |
| Raster Devices | 64 | 64 | 64 | 56 | 48 | 32 | 16 |
| Clock Speed | 1218 MHz | 1048 MHz | 1038 MHz | 941 MHz | 924 MHz | 950 MHz | 1097 MHz |
| Memory Bus | 256-bit | 256-bit | 256-bit | 192-bit | 192-bit | 128-bit | 128-bit |
| Frame Buffer | 8 GB GDDR5 | 8/4 GB GDDR5 | 8/4 GB GDDR5 | 6/3 GB GDDR5 | 6/3 GB GDDR5 | 4 GB GDDR5 | 4 GB GDDR5 |
| Memory Frequency | 7008 MHz | 5000 MHz | 5000 MHz | 5000 MHz | 5000 MHz | 5000 MHz | 5000 MHz |
| Memory Bandwidth | 224 GB/s | 160 GB/s | 160 GB/s | 120 GB/s | 120 GB/s | 80 GB/s | 80 GB/s |
| TDP | ~150W | 125W | 125W | 100W | 100W | 90W | 75W |
These new GPUs will reportedly be based on the same Maxwell GM204 core, and TDPs are apparently unchanged at 125W for the GTX 980MX, and 100W for the 970MX.
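As a quick sanity check on the chart's bandwidth column (my own arithmetic, not from the report), peak bandwidth follows from the effective memory data rate and the bus width:

```python
# Peak memory bandwidth = effective data rate per pin (Gb/s) * bus width (bits) / 8.
def bandwidth_gb_s(effective_mhz, bus_bits):
    per_pin_gbps = effective_mhz / 1000  # 5000 MHz effective -> 5 Gb/s per pin
    return per_pin_gbps * bus_bits / 8

for name, mhz, bus in [("GTX 980 (Laptop)", 7008, 256),
                       ("GTX 980MX / 980M", 5000, 256),
                       ("GTX 970MX / 970M", 5000, 192),
                       ("GTX 965M / 960M", 5000, 128)]:
    print(f"{name}: {bandwidth_gb_s(mhz, bus):.0f} GB/s")
# GTX 980 (Laptop):  224 GB/s
# GTX 980MX / 980M:  160 GB/s
# GTX 970MX / 970M:  120 GB/s
# GTX 965M / 960M:    80 GB/s
```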

We will await any official announcement.

Source: WCCFtech

AMD High-End Polaris Expected for 2016

Subject: Graphics Cards | January 19, 2016 - 02:44 AM |
Tagged: Polaris, amd

When AMD announced their Polaris architecture at CES, the focus was on mid-range applications. Their example was an add-in board that could compete against an NVIDIA GeForce GTX 950 at 1080p60 with medium settings in Battlefront, but do so at 39% less wattage than that 28nm Maxwell chip. These Polaris chips are planned for a "mid 2016" launch.

amd-2016-polaris-blocks.jpg

Raja Koduri, Chief Architect of the Radeon Technologies Group, spoke with VentureBeat at the show. In that conversation, he mentioned two architectures, Polaris 10 and Polaris 11, in the context of a question about their 2016 product generation. In the "high level" space, they are seeing "the most revolutionary jump in performance so far." This doesn't explicitly state that a high-end Polaris video card will launch in 2016, but, combined with the November announcement that we covered as "AMD Plans Two GPUs in 2016," it further supports that interpretation.

We still don't know much about what the actual performance of this high-end GPU will be, though. AMD was able to push 8 TeraFLOPs of compute throughput by creating a giant 28nm die and converting the memory subsystem to HBM, which supposedly requires less die complexity than a GDDR5 memory controller (according to a conference call last year that preceded Fury X). The two-node jump will give them more transistor budget to work with, but that could be partially offset by a smaller die, because of potential differences in yields (and so forth).

Also, while the performance of the 8 TeraFLOP Fury X was roughly equivalent to NVIDIA's 5.6 TeraFLOP GeForce GTX 980 Ti, we still don't know why. AMD has redesigned a lot of their IP blocks with Polaris; you would expect that, if something unexpected was bottlenecking Fury X, the graphics manufacturer wouldn't overlook it the next time they had a chance to tweak it. This could have been graphics processing or something much more mundane. Either way, upcoming benchmarks will be interesting.
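Those TeraFLOP figures follow from shader count and clock speed, assuming two floating-point operations (one FMA) per shader per clock; the sketch below uses the published 4096 shaders at ~1.05 GHz for Fury X and 2816 shaders at ~1.0 GHz for the GTX 980 Ti.

```python
# Peak single-precision throughput = shaders * clock (GHz) * 2 ops (one FMA) per clock.
def peak_tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000

print(f"Fury X:     {peak_tflops(4096, 1.05):.1f} TFLOPS")   # ~8.6
print(f"GTX 980 Ti: {peak_tflops(2816, 1.00):.1f} TFLOPS")   # ~5.6
```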

And it seems like that may be this year.

Source: VentureBeat

Catalyst Crimson Edition 16.1 Hotfix Drivers Released

Subject: Graphics Cards | January 14, 2016 - 12:42 AM |
Tagged: graphics drivers, amd

AMD's recent "Hotfix" drivers don't seem to mean the same thing that NVIDIA's do. In the Green Team's case, they usually fix one or two issues that slipped past QA. While they likely won't break anything, they are probably a bad idea to install if you're not experiencing the listed problems. The changelog on AMD's driver is significantly longer, with a list of known issues that is roughly the same size.

amd-2015-crimson-logo.png

So should you install it? That depends. It's a little less cut-and-dried than NVIDIA's hotfixes, which are only useful for a handful of people. It sounds like the worst known issues are "Game stuttering may be experienced when running two Radeon R9 295X2 graphics cards in CrossFire mode" and "Display corruption may occur on multiple display systems when it has been running idle for some time." The latter would affect me greatly, because I run four displays and basically never sleep or shut down (except for updates). On the other hand, it fixes a variety of crash, hang, and flicker issues.

Check it out. If it sounds good, then pick it up. Otherwise, wait for the next Beta or WHQL driver.

Source: AMD

GeForce Hotfix Driver 361.60 Released

Subject: Graphics Cards | January 13, 2016 - 01:11 AM |
Tagged: graphics drivers, graphics driver, nvidia

NVIDIA has been pushing for WHQL certification for their drivers, but sometimes issues slip through QA, both Microsoft's and their own internal team(s)'. Sometimes these issues will be fixed in a future release, but sometimes NVIDIA pushes out a "HotFix" driver immediately. This is often great for people who experience the problems, but the hotfixes should not be installed otherwise.

nvidia-2015-bandaid.png

In this case, GeForce Hotfix driver 361.60 fixes two issues. One is listed as "install & clocking related issues," which refers to the GPU memory clock. According to Manuel Guzman of NVIDIA, some games and software were not causing the driver to fully raise the memory clock to its high-performance state. The other issue is "Crashes in Photoshop & Illustrator," which fixes blue screen issues in both applications, and possibly other programs that use the GPU in similar ways. I've never seen GeForce driver 361.43 cause a BSOD in Photoshop, but I am a few versions behind with CS5.5.

Download links are available at NVIDIA Support, but unaffected users should just wait for an official driver in case the patch causes other issues, due to its minimal QA.

Source: NVIDIA

Report: NVIDIA Pascal GP104 Discovered, May Not Use HBM

Subject: Graphics Cards | January 11, 2016 - 11:05 PM |
Tagged: rumor, report, pascal, nvidia, HBM2, hbm, GP104

A delivery of GPUs and related test equipment from Taiwan to Bangalore has led to speculation about NVIDIA's upcoming GP104 Pascal GPU.

GP104_delivery.png

Image via Zauba.com

How much information can be gleaned from an import shipping manifest (linked here)? The data indicates a chip with a 37.5 x 37.5 mm package and 2152 pins, which is being attributed to the GP104 based on knowledge of "earlier, similar deliveries" (or possible inside information). This has prompted members of the 3dcenter.org forums (German language) to speculate that it uses GDDR5 or GDDR5X memory, based on how unlikely it is that HBM would be implemented on a package of this size.

Of course, NVIDIA has stated that Pascal will implement 3D memory, and the upcoming GP100 will reportedly be on a 55 x 55 mm package using HBM2. Could this be a new, lower-cost part using the existing GDDR5 standard or the faster GDDR5X instead? VideoCardz and WCCFtech have posted stories based on the 3DCenter report, and to quote directly from the VideoCardz post on the subject:

"3DCenter has a theory that GP104 could actually not use HBM, but GDDR5(X) instead. This would rather be a very strange decision, but could NVIDIA possibly make smaller GPU (than GM204) and still accommodate 4 HBM modules? This theory is not taken from the thin air. The GP100 aka the Big Pascal, would supposedly come in 55x55mm BGA package. That’s 10mm more than GM200, which were probably required for additional HBM modules. Of course those numbers are for the whole package (with interposer), not just the GPU."

All of this is a lot to take from a shipping record that might not even be related to an NVIDIA product, but the report has made the rounds at this point so now we’ll just have to wait for new information.

Source: 3DCenter.org
Manufacturer: PC Perspective
Tagged: moores law, gpu, cpu

Are Computers Still Getting Faster?

It looks like CES is starting to wind down, which makes sense because it ended three days ago. Now that we're mostly caught up, I found a new video from The 8-Bit Guy. He doesn't really explain any old technologies in this one. Instead, he poses an open question about computer speed. He was able to have a functional computing experience on a ten-year-old Apple laptop, which made him wonder if the rate of computer advancement is slowing down.

I believe that he (and his guest hosts) made great points, but also missed a few important ones.

One of his main arguments is that software seems to have slowed down relative to hardware. I don't believe that is true, but I believe he's looking in the right area. PCs these days are more than capable of doing just about anything we would want in terms of a 2D user interface, and of doing so with plenty of overhead left over for inefficient platforms and sub-optimal programming (relative to the '80s and '90s, at the very least). The areas that require extra horsepower usually involve large batches of many related tasks. GPUs are key in this area, and they are keeping up as fast as they can, despite some stagnation in fabrication processes and difficulty (at least before HBM takes hold) in keeping up with memory bandwidth.

For the last five to ten years or so, CPUs have been evolving toward efficiency while GPUs are being adopted for the tasks that need to scale up. I'm guessing that AMD, when they designed the Bulldozer architecture, hoped that GPU compute would be adopted much more aggressively, but even as graphics devices, GPUs now have a huge effect on Web, UI, and media applications.

google-android-opengl-es-extensions.jpg

These are also tasks that can scale well between devices by lowering resolution (and so forth). The primary thing that a main CPU thread needs to do is figure out the system's state and keep the graphics card fed before the frame-train leaves the station. In my experience, that doesn't scale well (although you can sometimes reduce the number of tracked objects in games and so forth). Moreover, it is easier to add GPU performance than single-threaded CPU performance, because increasing frequency and single-threaded IPC should be more complicated than laying out more duplicated blocks of shaders. These factors combine to give lower-end hardware a similar experience in the most noticeable areas.

So, up to this point, we discussed:

  • Software is often scaling in ways that are GPU (and RAM) limited.
  • CPUs are scaling down in power more than up in performance.
  • GPU-limited tasks can often be approximated with smaller workloads.
    • Software gets heavier, but it doesn't need to be "all the way up" (ex: resolution).
    • Some latencies are hard to notice anyway.

Back to the Original Question

This is where “Are computers still getting faster?” can be open to interpretation.

intel-devilscanyon-overview.JPG

Tasks are diverging from one class of processor into two, and each has its own industry with its own, multiple goals. As stated, CPUs are mostly progressing in power efficiency, which extends an (assumed to be) sufficient amount of performance downward to multiple types of devices. GPUs are definitely getting faster, but they can't do everything. At the same time, RAM is plentiful, but its contribution to performance can be approximated by paging unused chunks to the hard disk or, more recently on Windows, compressing them in place. Newer computers with extra RAM won't help as long as any single task only uses a manageable amount of it -- unless it's seen from a viewpoint that cares about multi-tasking.

In short, computers are still progressing, but the paths are now forked and winding.