SK Hynix Launches Its 8Gb GDDR6 Memory Running at 14 Gbps

Subject: Graphics Cards, Memory | January 24, 2018 - 11:04 PM |
Tagged: SK Hynix, graphics memory, gddr6, 8gb, 14Gbps

SK Hynix recently updated its product catalog and announced the availability of its eight gigabit (8 Gb) GDDR6 graphics memory. The new chips come in two SKUs and three speed grades: the H56C8H24MJR-S2C parts operate at 14 Gbps or 12 Gbps, while the H56C8H24MJR-S0C parts operate at 12 Gbps (though at a higher voltage than the -S2C SKU) or 10 Gbps. Voltages range from 1.25V at 10 Gbps, through either 1.25V or 1.35V at 12 Gbps depending on the SKU, up to 1.35V at 14 Gbps. Each 8 Gb GDDR6 chip holds 1 GB of memory and can provide up to 56 GB/s of per-chip bandwidth.
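
As a rough sanity check on that 56 GB/s figure, here is a minimal Python sketch, assuming the standard 32-bit I/O interface per GDDR chip (the interface width is not spelled out in the catalog excerpt above):

    # Per-chip bandwidth = pin speed (Gbps) x I/O width (bits) / 8 bits per byte
    def per_chip_bandwidth_gbs(pin_speed_gbps, io_width_bits=32):
        return pin_speed_gbps * io_width_bits / 8

    print(per_chip_bandwidth_gbs(14))  # 56.0 GB/s for the 14 Gbps grade
    print(per_chip_bandwidth_gbs(12))  # 48.0 GB/s
    print(per_chip_bandwidth_gbs(10))  # 40.0 GB/s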

While SK Hynix has a long way to go before competing with Samsung’s 18 Gbps GDDR6, its new chips are significantly faster than even its latest GDDR5 chips (the company is still working on bringing 9 Gbps and 10 Gbps GDDR5 to market). As a point of comparison, its fastest 10 Gbps GDDR5 would have a per-chip bandwidth of 40 GB/s versus 56 GB/s for its 14 Gbps GDDR6. A theoretical 8 GB graphics card with eight 8 Gb chips running at 10 Gbps on a 256-bit memory bus would have a maximum bandwidth of 320 GB/s. Replacing the GDDR5 with 14 Gbps GDDR6 in the same eight-chip, 256-bit bus configuration, the graphics card would hit 448 GB/s of bandwidth. In the Samsung story I noted that the Titan XP runs twelve 8 Gb GDDR5X memory chips at 11.4 Gbps on a 384-bit bus for a bandwidth of 547 GB/s. Replacing the G5X with GDDR6 running at 14 Gbps would ramp the bandwidth up to 672 GB/s.

Theoretical Memory Bandwidth
Chip Pin Speed | Per-Chip Bandwidth | 256-bit bus | 384-bit bus | 1024-bit (one package) | 4096-bit (4 packages)
10 Gbps        | 40 GB/s            | 320 GB/s    | 480 GB/s    | -                      | -
12 Gbps        | 48 GB/s            | 384 GB/s    | 576 GB/s    | -                      | -
14 Gbps        | 56 GB/s            | 448 GB/s    | 672 GB/s    | -                      | -
16 Gbps        | 64 GB/s            | 512 GB/s    | 768 GB/s    | -                      | -
18 Gbps        | 72 GB/s            | 576 GB/s    | 864 GB/s    | -                      | -
HBM2 (2 Gbps)  | 256 GB/s           | -           | -           | 256 GB/s               | 1 TB/s
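
The table values can be reproduced from the same pin-speed-times-bus-width arithmetic; here is a minimal Python sketch (the bus widths are just the configurations listed in the table, not any specific product):

    # Total bandwidth = pin speed (Gbps) x bus width (bits) / 8 bits per byte
    def bus_bandwidth_gbs(pin_speed_gbps, bus_width_bits):
        return pin_speed_gbps * bus_width_bits / 8

    for speed in (10, 12, 14, 16, 18):
        print(f"{speed} Gbps: 256-bit = {bus_bandwidth_gbs(speed, 256):.0f} GB/s, "
              f"384-bit = {bus_bandwidth_gbs(speed, 384):.0f} GB/s")

    # HBM2: 2 Gbps per pin on a 1024-bit stack; four stacks give a 4096-bit interface
    print(f"HBM2: one stack = {bus_bandwidth_gbs(2, 1024):.0f} GB/s, "
          f"four stacks = {bus_bandwidth_gbs(2, 4096):.0f} GB/s")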

GDDR6 is still a far cry from High Bandwidth Memory levels of performance, but it is much cheaper and easier to produce. With SK Hynix ramping up production and Samsung's 18 Gbps parts besting even the fastest 16 Gbps G5X, it is likely that the G5X stop-gap will be wholly replaced by GDDR6, with products like the upgraded 10 Gbps GDDR5 from SK Hynix picking up the low end. As more competition enters the GDDR6 space, prices should continue to come down and adoption should ramp up, with next generation GPUs, game consoles, network devices, etc. using GDDR6 for all but the highest tier prosumer and enterprise HPC markets.

January 25, 2018 | 06:59 AM - Posted by Martin (not verified)

"GDDR6 is still a far cry from High Bandwidth Memory levels of performance, but it is much cheaper and easier to produce." The table just above that statement counters it somewhat nicely. Unless someone does a 512-bit bus, 4-package HBM2 does have more bandwidth, but the 1/2/3-package HBM2 2 Gbps configurations do have suitable matches in reasonable-bus-width GDDR6 configurations.
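
To make that stack-count comparison concrete, a quick illustrative sketch using the same pin speed x bus width / 8 arithmetic as the table above (the 16 Gbps GDDR6 pairings are hypothetical examples, not shipping parts):

    # HBM2 stacks at 2 Gbps per pin vs 16 Gbps GDDR6 on common bus widths
    def bw(pin_gbps, width_bits):
        return pin_gbps * width_bits / 8

    print(bw(2, 1024), "GB/s for 1 HBM2 stack  vs", bw(16, 128), "GB/s for 128-bit GDDR6")
    print(bw(2, 2048), "GB/s for 2 HBM2 stacks vs", bw(16, 256), "GB/s for 256-bit GDDR6")
    print(bw(2, 3072), "GB/s for 3 HBM2 stacks vs", bw(16, 384), "GB/s for 384-bit GDDR6")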

January 25, 2018 | 11:58 AM - Posted by ThatDemandIsSettingGPUsPricingCurrently (not verified)

And GDDR6's cost savings will not be passed on to consumers, as GPU prices are currently set by supply and demand (mining)! So the GPU may as well use the more expensive HBM2, since that cost is not so big relative to the overall demand pricing on GPUs currently.

The only reason for GPU makers/AIB partners to use GDDR6 is going to be if no HBM2 can be sourced and makers have no choice in the matter. Either way GPU prices will remain high, so the consumer will not see any savings from GDDR6.

January 25, 2018 | 03:07 PM - Posted by pessimisticobserver (not verified)

GPU/graphics card OEMs receive 0 percent of current markups. It will be interesting to see if Nvidia and AMD choose to raise MSRPs this year. Seeing as the market can currently bear 200 percent markups, what's 80 to 120 USD added onto MSRPs?

January 25, 2018 | 04:41 PM - Posted by LargerCutOfTheActionForTheGPUMakers (not verified)

AMD and Nvidia need to raise their wholesale prices according to demand and only provide short-term purchase contracts for their discrete GPU dies. The retailers are getting most of the fruits of AMD's and Nvidia's hard labors.

Supply and demand is just a fact in the open marketplace, and AMD sure needs more of those revenues so it can afford to do more than one base die tape-out per generation for discrete GPUs, with at least one gaming (ROP-heavy) focused GPU base die tape-out per generation. Nvidia has 5 different base die tape-outs per generation, with Pascal getting GP100, GP102, GP104, GP106, and GP108, while AMD has only one discrete GPU die tape-out (Vega 10) that is very compute heavy and only offers 64 ROPs max.

The Vega GPU micro-arch is actually very efficient if you look at the integrated Vega variants that have a smaller shader-to-ROP ratio with fewer power hungry shader cores. Vega only has issues for gaming when it's compared to the GP102 (96 ROPs available) based GTX 1080 Ti, which uses 88 out of 96 available ROPs to push out a much higher frame rate than Vega 56/64 (64 ROPs max), and even the GP104 (64 ROPs max) based GTX 1080.

So Nvidia's secret sauce for gaming performance is mostly the ROPs made available on one of its five base GPU die tape-outs, with loads of ROPs available for the FPS-metrics-winning SKUs like the GTX 1080 Ti and its 88 ROPs. AMD just needs to respin a Vega base die tape-out with at least 96 available ROPs and the ability to trim shaders without taking any ROPs along with the shaders, and that will give AMD the necessary ROP counts to compete.

Navi is a whole other system in that Navi will be made of modular GPU dies (chiplets) and will allow AMD to scale up with GPU die (chiplet) counts like AMD does with its Zen/Zeppelin modular dies. So AMD can effectively increase its GPU die/wafer yields by using smaller dies and, with modular dies, scale up its GPU designs from low end to flagship by simply including more modular GPU dies/chiplets to suit the GPU's performance needs.

Navi will also allow AMD to produce compute/AI focused modular GPU dies and ROP/TMU heavy modular dies for gaming, all with different shader-to-graphics-unit ratios to address the needs of different market segments across the professional and consumer graphics markets. AMD will probably be including tensor processing modular dies for Navi to address that AI market more directly.

The real change with Navi will be that move toward modular GPU dies that can be scaled up easily, more so than any new GPU micro-arch features above Vega's micro-arch, with the possible exception of specialized TPU cores/dies for any Navi AI based SKUs.

The Infinity Fabric is AMD's scalable control/data fabric that will make Navi possible, and that IF IP is already inside Zen/Vega. So that also points to maybe some dual Vega die based variants on one PCIe card that can be scaled up without needing any driver side (CrossFire) types of solutions with their inefficient multi-GPU scaling issues. AMD should be able to take 2 Vega dies, wire/interface them up via the Infinity Fabric, and have the 2 Vega dies appear to software as just one larger logical GPU. Remember that the IF also supports cache coherency across processor dies and sockets, so dual Vega GPUs on one PCIe card will be different for Vega/Navi than what came before.
