Intel Launches Optane DC Persistent Memory (DIMMs), Talks 20TB QLC SSDs

Subject: Storage | May 30, 2018 - 07:28 PM |
Tagged: ssd, QLC, Optane DC, Optane, Intel, DIMM, 3D XPoint, 20TB

Lots of good stuff came out of Intel's press event earlier today. First up is Optane, now (finally and officially) in a DIMM form factor!

[Image: Intel Optane DC Persistent Memory]

We have seen and tested Optane in several forms, but all of them so far have been bottlenecked by their interface and controller architectures. The only real way to fully realize the performance gains of 3D XPoint (how it works here) is to move away from the slower interfaces holding it back, and a DIMM form factor is the logical next step.

[Slide: Filling the gaps between memory and storage]

Intel positions the new 'Optane DC Persistent Memory' as yet another tier up the storage/memory stack. The new parts will be available in 128GB, 256GB, and 512GB capacities. We don't have confirmation on the raw capacity, but based on Intel's typical max stack height of 4 dies per package, 3D XPoint's raw die capacity of 16GB, and a suspected 10 packages per DIMM, that should come to 640GB of raw capacity. Combined with a 60 DWPD endurance rating (up from 30 DWPD for the P4800X), this shows Intel loosening up their design margins considerably. That makes sense: 3D XPoint was a radically new and unproven media when first launched, and it has since built up a decent track record in the field.
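For the curious, here's that back-of-the-envelope math, along with the overprovisioning it implies for a 512GB DIMM. Remember that the die capacity, stack height, and package count are our assumptions, not confirmed Intel specs:

```python
# Raw capacity estimate for a 512GB Optane DC Persistent Memory DIMM,
# using the assumptions stated above (not confirmed by Intel).
dies_per_package  = 4     # Intel's typical max stack height
die_capacity_gb   = 16    # raw capacity of a single 3D XPoint die
packages_per_dimm = 10    # suspected package count per DIMM

raw_gb    = dies_per_package * die_capacity_gb * packages_per_dimm
usable_gb = 512

print(f"Raw capacity:     {raw_gb} GB")                   # 640 GB
print(f"Overprovisioning: {raw_gb / usable_gb - 1:.0%}")  # ~25% spare area
```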

[Chart: Bridging The Gap - part of a sequence from our first P4800X review]

Recall that even with Intel's Optane DC SSD parts like the P4800X, there remained a ~100x latency gap between DRAM and the storage. The move to DIMMs should help Intel push closer to the '1000x faster than NAND' claims made way back when 3D XPoint was launched. Even if DIMMs can extract all of the physical latency gains XPoint has to offer, there will still be limits imposed by today's software architectures, which carry plenty of legacy baggage from the HDD era. Intel generally tries to help this along by providing caching solutions that let Optane directly augment the OS's memory. These new DIMMs, when coupled with supporting enterprise platforms capable of logically segmenting RAM and NV DIMM slots, should be accessible either directly (as persistent storage) or as a memory expansion tier.
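To illustrate the 'direct access' half of that, here is a minimal sketch of what application code looks like once a persistent memory region is exposed through a DAX-capable filesystem on Linux. This is a conceptual example (the mount point and file name are hypothetical), not Intel's documented programming model; production persistent-memory code would typically go through a library like PMDK and manage CPU cache flushes explicitly.

```python
# Minimal sketch: load/store access to persistent memory via mmap on a
# DAX-mounted filesystem. The path below is a hypothetical example.
import mmap
import os

PMEM_FILE = "/mnt/pmem/example.dat"   # file on a DAX mount (e.g. ext4 -o dax)
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o644)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as pmem:
    pmem[0:13] = b"hello, optane"     # stores go straight to the mapped region
    pmem.flush()                      # msync; ensures the write is persistent
os.close(fd)
```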

Circling back to raw performance, we'll have to let software evolve a bit further to see even better gains out of XPoint platforms. That's likely the reason Intel did not discuss any latency figures for the new products today. My guess is that latencies should push down into the 1-3us range, splitting the difference between current generation DRAM (~80-100ns) and PCIe-based Optane parts (~10us). While the DIMM form factor is certainly faster, there is still a management layer at play here, meaning some form of controller or a software layer to handle wear leveling. No raw XPoint sitting on the memory bus just yet.
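As a quick sanity check, 'splitting the difference' on a latency scale really means the geometric mean, and that lands right at the low end of that guess (the inputs below are the rough figures quoted above, not measured numbers):

```python
import math

dram_ns        = 100      # ~80-100ns DRAM access (rough figure)
pcie_optane_ns = 10_000   # ~10us for PCIe-attached Optane (rough figure)

# Splitting the difference on a log scale = geometric mean of the two tiers
midpoint_ns = math.sqrt(dram_ns * pcie_optane_ns)
print(f"Log-scale midpoint: {midpoint_ns:.0f} ns (~{midpoint_ns / 1000:.0f} us)")  # ~1000 ns
```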

Also out of the event came talk of QLC NAND flash. Recently announced by Intel / Micron alongside 96-layer 3D NAND development, QLC squeezes higher capacities out of a given NAND flash die. Endurance does take a hit, but so long as the higher-density media is matched to appropriate client/enterprise workloads, there should be no issue with premature media wear-out or data retention. Micron has already launched an enterprise QLC part, and while Intel has been hush-hush on actual product launches, they did talk about both client and enterprise QLC parts (with the latter pushing to 20TB in a 2.5" form factor).

Press blast for Optane DC Persistent Memory appears after the break (a nicer layout is available by clicking the source link).

Intel, Micron Jointly Announce QLC NAND FLASH, 96-Layer 3D Development

Subject: Storage | May 21, 2018 - 04:31 PM |
Tagged: ssd, QLC, NVMe, nand, Intel, Floating Gate, flash, die, 1Tbit

In tandem with Micron's launch of their new enterprise QLC SSDs, there is a broader technology announcement coming out of Intel today: Intel and Micron have jointly developed shippable 64-layer 3D QLC NAND.

[Image: IMFT 3D NAND]

IMFT's 3D NAND announcement came back in early 2015, and the Intel/Micron joint venture has been pushing its floating gate technology further and further since. Not only do we have the QLC announcement today, but with it came word of progress on 96-layer development as well. Combining QLC with 96 layers would yield a single-die capacity of 1.5 Tbit (192GB), up from the 1 Tbit (128GB) capacity of the 64-layer QLC die that is now in production.
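The die-capacity scaling is easy to check against the figures above:

```python
# Die capacity scaling from 64-layer QLC to 96-layer QLC (figures from the article)
base_layers, base_tbit = 64, 1.0   # 64-layer QLC die: 1 Tbit (128 GB)
new_layers = 96

new_tbit = base_tbit * new_layers / base_layers
print(f"{new_layers}-layer QLC die: {new_tbit} Tbit ({new_tbit * 1024 / 8:.0f} GB)")
# -> 1.5 Tbit (192 GB)
```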


This new flash won't be aimed at power users, but it should be completely usable in a general-use client SSD, provided there is a bit of SLC (or 3D XPoint???) cache on the front end. QLC stores 33% more data in the same die space, which should eventually translate to a lower $/GB once development costs have been recouped. Here's hoping for lower-cost SSDs in the future!

Press blast after the break!

Micron Launches 5210 ION - First QLC NAND Enterprise SATA SSD

Subject: Storage | May 21, 2018 - 04:30 PM |
Tagged: ssd, sata, QLC, nand, micron, enterprise

For those who study how flash memory stores bits, Quad Level Cell technology is a tricky thing to pull off in production. You are taking a single NAND flash cell and changing its stored electron count in such a way that you can later discriminate between SIXTEEN different states.
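For reference, the number of states a cell must reliably hold grows exponentially with the bits stored per cell, which is exactly why QLC is so much harder to pull off than TLC:

```python
# Voltage states a NAND cell must distinguish, by bits stored per cell
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} states")
# QLC packs 16 states into the same overall voltage window,
# so the margin between adjacent states shrinks dramatically.
```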

[Chart: SLC vs MLC vs TLC vs QLC]

...we're talking a countable number (dozens to hundreds) of electrons making the difference between a stored 0101 or 0110 in a given cell. Pulling that off in production-capable parts is no small feat, and doing so for enterprise usage first is definitely a bold move. Enter Micron:

[Image: Micron 5210 ION 2.5" SATA SSD]

The 5210 ION line is a SATA product meant for enterprise use cases where the workload is primarily reads. This comes in handy for things like real-time data analytics and content delivery systems, where data is infrequently written but needs to be readable at latencies far quicker than HDDs can provide.

[Chart: ION / ECO / PRO lineup comparison]

These are 2.5" 7mm SSDs that will be available from 1.92TB to 7.68TB (yes, 2TB is the *smallest* available capacity for these!). The idea is to enable an easy upgrade path for larger data systems that already employ SATA or SAS (SAS systems are typically cross-compatible with SATA). For backplanes that are designed for slimmer 7mm drives, this can make for some extreme densities.

These are currently sampling to some big data companies and should see more general availability in a few months' time. The press blast from Micron appears after the break.

Source: Micron

FMS 2017: Samsung Announces QLC V-NAND, 16TB NGSFF SSD, Z-SSD V2, Key Value

Subject: Storage, Shows and Expos | August 8, 2017 - 05:37 PM |
Tagged: z-ssd, vnand, V-NAND, Samsung, QLC, FMS 2017, 64-Layer, 3d, 32TB, 1Tbit

As is typically the case for Flash Memory Summit, the Samsung keynote was chock full of goodies:


Samsung kicked off by stating there are a good 5 years of revisions left in store for their V-NAND line, each with a corresponding increase in speed and capacity.


While V-NAND V4 was 64-layer TLC, V5 moves to QLC, bringing per-die capacity to 1Tbit (128 GB).


If you were to stack 32 of these new V5 dies per package and fit enough packages into a large enough 2.5" housing, the maximum capacity of such a device comes to a whopping 128TB!
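The math behind that figure implies a 32-package device, though Samsung did not spell that part out on the slide:

```python
die_gb           = 128                 # 1 Tbit V5 QLC die
dies_per_package = 32
device_tb        = 128

package_tb = die_gb * dies_per_package / 1024        # 4 TB per package
print(f"Per package: {package_tb:.0f} TB")
print(f"Packages for a {device_tb}TB device: {device_tb / package_tb:.0f}")  # 32
```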


Samsung also discussed a V2 of their Z-NAND, moving from SLC to MLC while only adding 2-3 us of latency per request. Z-NAND is basically a quicker version of NAND flash designed to compete with 3D XPoint.


M.2 SSDs started life under the working title NGFF. Fed up with the limitations of this client-oriented form factor in the enterprise, Samsung is pushing a slightly larger NGSFF form factor that supports higher capacities per device. Samsung claimed a PM983 NGSFF SSD will hold 16TB, a 1U chassis full of them 576TB, and a 2U chassis pushes that figure to 1.15PB.
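Those chassis figures also imply the drive counts, which Samsung did not state directly:

```python
drive_tb   = 16
chassis_tb = {"1U": 576, "2U": 1152}   # 1.15 PB ~= 1152 TB

for form_factor, total_tb in chassis_tb.items():
    print(f"{form_factor}: {total_tb} TB / {drive_tb} TB per NGSFF SSD = "
          f"{total_tb // drive_tb} drives")
# -> 36 drives per 1U, 72 per 2U
```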


Last up is 'Key Value'. This approach lets applications store and retrieve key/value pairs on the SSD directly, bypassing the traditional block layer, which enables more efficient use of the flash and therefore higher overall performance.

There were more points brought up that we will be covering later on, but for now the full press release that went out during the keynote appears after the break.

Podcast #461 - AMD Ryzen 3, Threadripper, Logitech Powerplay, and more!

Subject: General Tech | August 3, 2017 - 12:00 PM |
Tagged: podcast, wolfenstein, wdc, Vibe, Vega Nano, Threadripper, ryzen 3, radeon rx vega, QLC, htc, Fanatec, Clubsport lite elite, BiCS3, amd, video

PC Perspective Podcast #461 - 08/03/17

Join us for AMD Ryzen 3, Threadripper, Logitech Powerplay, and more!

You can subscribe to us through iTunes and you can still access it directly through the RSS page HERE.

The URL for the podcast is: http://pcper.com/podcast - Share with your friends!

Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, Allyn Malventano

Peanut Gallery: Ken Addison, Alex Lustenberg

Program length: 1:38:20

Podcast topics of discussion:
  1. Week in Review:
  2. News items of interest:
  3. Hardware/Software Picks of the Week
    1. 1:25:45 Ryan: Logitech G903
    2. 1:34:05 Allyn: Things I would have loved to grow up learning / playing (pixel kit): 1 2
  4. Closing/outro


Western Digital BiCS3 Flash Goes QLC - 96GB per die!

Subject: Storage | August 2, 2017 - 06:21 PM |
Tagged: BiCS3, western digital, wdc, WD, tlc, slc, QLC, nand, mlc, flash, 96GB, 768Gb, 3d

A month ago, WD and Toshiba each put out releases related to their BiCS 3D flash memory. WD announced 96 layers (BiCS4) as their next capacity node, while Toshiba announced that it can reliably store four bits per cell (QLC) on the current node.

[Slide: QLC (Toshiba FMS keynote)]

WD recently put out their own press release related to QLC, partially mirroring Toshiba's announcement, but this one adds some detail on capacity per die as well as the technology names WD attaches to these shifts. TLC is referred to as "X3", and "X4" is the name for their QLC tech as applied to BiCS. The release states that X4, applied to BiCS3, yields 768Gbit (96GB) per die vs. 512Gbit (64GB) per die for X3 (TLC). Bear in mind that while the release (and the math) calls this a 50% increase, moving from TLC to QLC with the same number of cells only yields a 33% increase, meaning X4 BiCS3 dies need additional cells (and footprint) to make up that extra ~17%.
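Working the numbers: that 'extra ~17%' of capacity (relative to the 512Gbit TLC die) corresponds to roughly 12.5% more cells on the QLC die:

```python
tlc_gbit = 512   # X3 (TLC) BiCS3 die
qlc_gbit = 768   # X4 (QLC) BiCS3 die

qlc_same_cells = tlc_gbit * 4 / 3             # TLC -> QLC with an identical cell count
extra_cells    = qlc_gbit / qlc_same_cells - 1

print(f"QLC on the same array: ~{qlc_same_cells:.0f} Gbit")      # ~683 Gbit
print(f"Additional cells/footprint needed: ~{extra_cells:.1%}")  # ~12.5%
```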

The release ends by hinting at X4 being applied to BiCS4 in the future, which is definitely exciting. Merging the two recently announced technologies would yield a theoretical 96-layer BiCS4 die using X4 QLC, good for 1152 Gbit (144GB) per die. A 16-die stack of those would come to 2,304 GB (1.5x the previously stated 1.5TB figure). The 2,304 figure might look incorrect, but remember that we are multiplying two 'odd' capacities together: 768 Gbit (1.5x the 512Gbit TLC die) and 96 layers (1.5x the 64 layers of BiCS3).
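The projected BiCS4 X4 figures fall out the same way; keep in mind these are extrapolations, not announced parts:

```python
x4_bics3_gbit = 768
layer_scaling = 96 / 64                    # BiCS3 -> BiCS4

x4_bics4_gbit = x4_bics3_gbit * layer_scaling
die_gb   = x4_bics4_gbit / 8
stack_gb = die_gb * 16                     # 16-die stack in a single package

print(f"X4 BiCS4 die: {x4_bics4_gbit:.0f} Gbit ({die_gb:.0f} GB)")      # 1152 Gbit / 144 GB
print(f"16-die stack: {stack_gb:.0f} GB (~{stack_gb / 1024:.2f} TB)")   # 2304 GB
```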

Press blast appears after the break.

Toshiba and Western Digital announce QLC and 96-Layer BiCS Flash

Subject: Storage | June 28, 2017 - 09:49 PM |
Tagged: wdc, WD, toshiba, QLC, nand, BiCS, 96-layer, 3d

A couple of announcements came out of Toshiba and Western Digital today. First up is Toshiba announcing QLC (4 bits per cell) flash on their existing BiCS 3 (64-layer) technology. QLC may not be the best for endurance, as the voltage tolerances become extremely tight with 16 individual voltage states per cell, but Toshiba has been working on this tech for a while now.

[Slide: QLC use cases, from Toshiba's keynote at last year's Flash Memory Summit]

In the above slide from the Toshiba keynote at last year's Flash Memory Summit, we see the use case here is for 'archival grade flash', which would still offer fast reads but is not meant to be written as frequently as MLC or TLC flash. Employing QLC in Toshiba's current BiCS 3 (64-layer) flash would enable 1.5TB of storage in a 16-die stack (within one flash memory chip package).
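That 1.5TB-per-package figure checks out against the 768Gbit per-die capacity quoted in the WD release covered above:

```python
qlc_die_gbit     = 768   # 64-layer BiCS3 QLC die
dies_per_package = 16

package_gb = qlc_die_gbit * dies_per_package / 8
print(f"16-die QLC package: {package_gb:.0f} GB (~{package_gb / 1024:.1f} TB)")  # ~1.5 TB
```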

[Image: BiCS roadmap]

Next up is BiCS 4, announced by Western Digital. We knew BiCS 4 was coming but did not know how many layers it would have. We now know that figure, and it is 96. The initial offerings will be the common 256Gbit (32GB) capacity per die, but stacking cells 96 layers high means each die comes in considerably smaller, meaning more dies per wafer and ultimately a lower cost per GB in your next SSD.
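As a very rough approximation (ignoring peripheral circuitry and any cell-size changes, so treat this as a ballpark rather than a real die-size figure), holding capacity constant while adding layers shrinks the array footprint like so:

```python
# Ballpark array-area comparison for a fixed 256Gbit die at 64 vs 96 layers.
# Ignores peripheral circuitry and process changes; not a real die-size figure.
layers_old, layers_new = 64, 96

relative_area = layers_old / layers_new
print(f"Same capacity at {layers_new} layers needs ~{relative_area:.0%} of the array area")
print(f"-> up to ~{1 / relative_area:.1f}x as many dies per wafer")
# ~67% of the area, i.e. up to ~1.5x the dies per wafer
```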

While these announcements are welcome, the timing of a coordinated launch from both companies seems odd. Perhaps it has something to do with this?