Micron Launches 32GB NVDIMM-N - Intel Announces 3D XPoint NVDIMM

Subject: Storage | November 15, 2017 - 09:59 PM |
Tagged: NVDIMM, XPoint, 3D XPoint, 32GB, NVDIMM-N, NVDIMM-F, NVDIMM-P, DIMM

We're finally starting to see NVDIMMs materialize beyond unobtainium status. Micron recently announced a 32GB NVDIMM-N:

micron-nvdimm.png

These come with 32GB of DRAM plus 64GB of SLC NAND flash.

micron-nvdimm-modes.png

These are in the NVDIMM-N form factor, where the DRAM serves as normal working memory and the NAND is used only to preserve its contents on power loss, so they can offer some very impressive latency improvements over other non-volatile storage methods.

Next up is Intel, who recently presented at the UBS Global Technology Conference:

XPoint_DIMM.png

We've seen Intel's Optane in many different forms, and now it looks like we finally have a date for 3D XPoint DIMMs - the second half of 2018! There are plenty of hurdles to overcome, as the JEDEC spec is not yet finalized (and might not be by the time these launch). Motherboard and BIOS support will also need to become more widespread for this to take off.

Don't expect this to be in your desktop machine anytime soon, but one can hope!

Press blast for the Micron 32GB NVDIMM-N appears after the break.

Intel Persistent Memory Using 3D XPoint DIMMs Expected Next Year

Subject: General Tech, Memory, Storage | May 26, 2017 - 10:14 PM |
Tagged: XPoint, Intel, HPC, DIMM, 3D XPoint

Intel recently teased a bit of new information on its 3D XPoint DIMMs and gave the technology its first public demonstration at the SAP Sapphire conference, where SAP’s HANA in-memory data analytics software was shown working with the new “Intel persistent memory.” Slated to arrive in 2018, the new Intel DIMMs based on the 3D XPoint technology developed by Intel and Micron will work in systems alongside traditional DRAM to provide a pool of fast, low latency, and high density nonvolatile storage that is a middle ground between expensive DDR4 and cheaper NVMe SSDs and hard drives. Looking at the storage stack, density and latency both increase the further storage sits from the CPU; conversely, as storage and memory get closer to the processor, bandwidth increases, latency decreases, and cost per unit of storage goes up. Intel is hoping to bridge the gap between system DRAM and PCI-E and SATA storage.

Intel persistent memory DIMM.jpg

According to Intel, system RAM offers up 10 GB/s per channel and approximately 100 nanoseconds of latency. 3D XPoint DIMMs will offer 6 GB/s per channel and about 250 nanoseconds of latency. Below that are the 3D XPoint-based NVMe SSDs (e.g. Optane) on a PCI-E x4 bus, where they max out the bandwidth of the bus at ~3.2 GB/s with around 10 microseconds of latency. Intel claims that non-XPoint NVMe NAND solid state drives have around 100 microseconds of latency, and of course, it gets worse from there with SATA SSDs or hard drives hanging off the SATA bus.
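To put those figures side by side, here is a quick back-of-the-envelope comparison using only the numbers Intel quoted above; the script itself is just an illustrative sketch, not anything Intel provides.

```python
# Approximate per-tier figures as quoted by Intel above.
# (name, bandwidth in GB/s, latency in seconds)
tiers = [
    ("DDR4 DRAM (per channel)",      10.0, 100e-9),
    ("3D XPoint DIMM (per channel)",  6.0, 250e-9),
    ("Optane NVMe SSD (PCI-E x4)",    3.2,  10e-6),
    ("NAND NVMe SSD",                None, 100e-6),
]

dram_latency = tiers[0][2]
for name, bw, lat in tiers:
    bw_str = f"{bw:.1f} GB/s" if bw is not None else "n/a"
    ratio = lat / dram_latency
    print(f"{name:30s} {bw_str:>10s}  {lat * 1e9:8.0f} ns  "
          f"(~{ratio:,.1f}x DRAM latency)")
```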

Intel’s new XPoint DIMMs offer persistent storage and will provide more capacity than is possible and/or cost effective with DDR4 DRAM. In giving up some bandwidth and latency, enterprise users will be able to have a large pool of very fast storage for their databases and other latency- and bandwidth-sensitive workloads. Intel does note that there are security concerns with the XPoint DIMMs being nonvolatile: an attacker with physical access could simply pull the DIMM and walk away with the data (it is at least theoretically possible to grab some data from DRAM as well, but it would be much easier from the XPoint sticks). Encryption and other security measures will need to be implemented to secure the data, both in use and at rest.

Intel Slide XPoint Info.jpg

Interestingly, Intel is not positioning the XPoint DIMMs as a replacement for RAM, but as a supplement to it. RAM and XPoint DIMMs will be installed in different slots of the same system: the DDR4 RAM will be used for the OS and system-critical applications, while the XPoint pool will hold the data that applications work on, much like a traditional RAM disk but without needing to load and save the data to a different medium for persistence, and offering a lot more GBs for the money.
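For a rough sense of what "a RAM disk without the separate save step" could look like from an application's point of view, here is a minimal Python sketch that memory-maps a file on a hypothetical DAX-style persistent memory mount. The /mnt/pmem path is purely illustrative, and real persistent memory programming would typically go through something like Intel's PMDK with proper flush and ordering guarantees; this only shows the general load/store-style access pattern.

```python
import mmap
import os

# Hypothetical file on a persistent-memory-backed mount; the path is
# illustrative only and does not refer to anything Intel has announced.
PMEM_PATH = "/mnt/pmem/working_set.bin"
SIZE = 64 * 1024 * 1024  # 64 MB region

# Create and size the backing file, then map it into the address space.
fd = os.open(PMEM_PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)
region = mmap.mmap(fd, SIZE)

# Ordinary loads and stores operate directly on the mapped region;
# unlike a RAM disk, the data does not need to be copied out to a
# separate medium to survive a power cycle.
region[0:11] = b"hello world"
region.flush()  # request that the stores be made durable
print(region[0:11])

region.close()
os.close(fd)
```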

While XPoint is set to arrive next year along with Cascade Lake Xeons, it will likely be a couple of years before the technology takes off. Adoption is going to require hardware and software support in workstations and servers, as well as developers willing to take advantage of it when writing their specialized applications. Fortunately, Intel started shipping the memory modules to its partners for testing earlier this year. It is an interesting technology, and the DIMM form factor with its direct CPU interface is what will really let 3D XPoint memory shine and reach its full potential. It will primarily be useful for the enterprise, scientific, and financial industries, where there is a huge need for faster, lower latency storage that can accommodate massive (multiple terabyte+) data sets that continue to get larger and more complex. It is a technology that likely will not trickle down to consumers for a long time, but I will be ready when it does. In the meantime, I am eager to see what kinds of things it will enable the big data companies and researchers to do! Intel claims it will not only be useful for supporting massive in-memory databases and accelerating HPC workloads, but also for things like virtualization, private clouds, and software defined storage.

What are your thoughts on this new memory tier and the future of XPoint?


Source: Intel

Diablo Technologies Unveils Memory Channel Storage

Subject: Storage | July 31, 2013 - 05:34 AM |
Tagged: diablo technologies, DIMM, nand, flash memory, memory channel storage

Ottawa-based Diablo Technologies unveiled a new flash storage technology yesterday that it calls Memory Channel Storage. As the name suggests, the technology puts NAND flash into a DIMM form factor and interfaces the persistent storage directly with the processor via the integrated memory controller.

Memory Channel Storage (MCS) modules are a drop-in replacement for DDR3 RDIMMs (Registered DIMMs) in servers and storage arrays. Unlike DRAM, MCS is persistent storage backed by NAND flash, and it allows servers to have terabytes of storage connected to the CPU via the memory interface instead of mere gigabytes of DRAM, acting as either system memory or block storage. The photo provided in the technology report (PDF) shows a 400GB MCS module that can slot into a standard DIMM slot, for example.

Diablo Technologies claims that MCS exhibits lower latencies and higher bandwidth than PCI-E and SATA SSDs. More importantly, the storage latency is predictable and consistent, making it useful for applications such as high frequency stock trading where speed and deterministic latency are paramount. Further, users get linear increases in throughput with each additional Memory Channel Storage module added to the system. According to Diablo Technologies, latencies with MCS are as much as 85% lower than with PCI-E SSDs and 96% lower than with SAS/SATA SSDs. NAND flash maintenance such as wear leveling is handled by an on-board logic chip.

Diablo Technologies Memory Channel Storage.jpg

Diablo Technologies is also aiming the MCS technology at servers running big data analytics, massive cloud databases, financial applications, server virtualization, and Virtual Desktop Infrastructure (VDI). MCS can act as super fast local storage or as a local cache for external (such as networked) storage in the form of mechanical hard drives or SSDs.

Some details about Memory Channel Storage are still unclear, but it looks promising. It will not be quite as fast in random access as DRAM, but it will be better (more bandwidth, lower latency) than both PCI-E and SATA-connected NAND flash-based SSDs. It would be awesome to see this kind of tech make its way to consumer systems so that I could have what amounts to a physical RAM disk with fast persistent storage (at least in terms of form factor; MCS uses NAND rather than DRAM chips).

The full press release can be found here.

Samsung Releases Engineering Samples of 32GB Green DDR3 For Future Servers

Subject: Memory | August 19, 2011 - 10:47 AM |
Tagged: Samsung, DIMM, ddr3, 30nm

Samsung recently released engineering samples of new 32GB DDR3 memory modules for evaluation. Specifically, the new modules are registered dual inline memory modules (RDIMMs) that use a “three dimensional through silicon via (TSV) package technology” that provides higher performance and density.

The new DIMMs are built from Samsung’s four gigabit 30 nanometer class DDR3 DRAM and are capable of delivering 1,333 Mbps per pin. Further, they consume 4.5 watts, which Samsung claims makes them among the lowest power consuming enterprise DIMMs. Compared to LRDIMMs (load reduced modules), the Samsung modules offer 30 percent energy savings.

971108110817_3d-tsv_32gb_rdimm_l.jpg

The company claims that these power savings are a direct result of the through silicon via technology, which allows it to vertically stack the DRAM dies while maintaining power levels comparable to conventional single-die chips. Further, the company stated that it is working with CPU and controller engineers to hasten the adoption of higher capacity DIMMs. No word yet on pricing or whether these DIMMs will ever see full production and enterprise usage in their current form.

Source: Samsung