Subject: Storage | May 30, 2018 - 07:28 PM | Allyn Malventano
Tagged: ssd, QLC, Optane DC, Optane, Intel, DIMM, 3D XPoint, 20TB
Lots of good stuff coming out of Intel's press event earlier today. First up is Optane, now (finally and officially) in a DIMM form factor!:
We have seen and tested Optane in several forms, but all so far have been bottlenecked by the interface and controller architectures. The only real way to fully realize the performance gains of 3D XPoint (how it works here) is to move away from the slower interfaces that are holding it back. A DIMM form factor is just the next logical step here.
Intel shows the new 'Optane DC Persistent Memory' as yet another tier up the storage/memory stack. The new parts will be available in 128GB, 256GB, and 512GB capacities. We don't have confirmation on the raw capacity, but based on Intel's typical max stack height of 4 dies per package, 3D XPoint's raw die capacity of 16GB, and a suspected 10 packages per DIMM, that should come to 640GB raw capacity. Combined with a 60 DWPD rating (up from 30 DWPD for the P4800X), this shows Intel is loosening up their design margins considerably. This makes sense, as 3D XPoint was a radically new and unproven media when first launched, and it has since built up a decent track record in the field.
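For those following along, the raw-capacity estimate works out like this (die capacity and stack height are Intel's norms as stated above; the package count is the suspected part, and the spare-area figure is my own inference from pairing 640GB raw with the 512GB part):

```python
# Back-of-envelope check of the raw-capacity estimate above.
DIE_CAPACITY_GB = 16      # raw capacity of one 3D XPoint die
DIES_PER_PACKAGE = 4      # Intel's typical max stack height
PACKAGES_PER_DIMM = 10    # suspected packages per DIMM

raw_gb = DIE_CAPACITY_GB * DIES_PER_PACKAGE * PACKAGES_PER_DIMM
print(raw_gb)  # 640

# If the 512GB user-capacity part really sits on 640GB of raw media,
# the implied spare area (overprovisioning) is:
spare_fraction = (raw_gb - 512) / raw_gb
print(f"{spare_fraction:.0%}")  # 20%
```

That 20% spare figure would be generous but not unusual for an endurance-focused enterprise part.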
Bridging The Gap chart - part of a sequence from our first P4800X review.
Recall that even with Intel's Optane DC SSD parts like the P4800X, there remained a ~100x latency gap between DRAM and the storage. The move to DIMMs should help Intel push closer to the '1000x faster than NAND' claims made way back when 3D XPoint was launched. Even if DIMMs were able to extract all possible physical latency gains from XPoint, there would still be limitations imposed by today's software architectures, which carry many legacy throwbacks from the era of HDDs. Intel generally tries to help this along by providing various caching solutions that allow Optane to directly augment the OS's memory. These new DIMMs, when coupled with supporting enterprise platforms capable of logically segmenting RAM and NV DIMM slots, should be accessible either directly (as persistent storage) or as a memory expansion tier.
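Those access modes echo what persistent-memory programming already looks like today: a region exposed through a DAX-capable filesystem can be memory-mapped and touched with plain loads and stores, with no block I/O in the path. A minimal sketch of the idea, using an ordinary temp file as a stand-in for a real pmem mount (which I obviously don't have on hand):

```python
import mmap
import os
import tempfile

# On a real platform this would be a file on a DAX mount (e.g. something
# under /mnt/pmem0 -- hypothetical path); a temp file stands in here.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)

with mmap.mmap(fd, 4096) as pm:
    pm[0:5] = b"hello"   # ordinary store into the mapping, no write() syscall
    pm.flush()           # on real pmem, this is where stores reach the media
os.close(fd)

# Prove the store persisted past the mapping's lifetime:
with open(path, "rb") as f:
    data = f.read(5)
print(data)  # b'hello'
os.remove(path)
```

The interesting part is what's absent: no read/write syscalls and no page cache copy between the application and the media.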
Circling back to raw performance, we'll have to let software evolve a bit further to see even better gains out of XPoint platforms. That's likely the reason Intel did not discuss any latency figures for the new products today. My guess is that latencies should push down into the 1-3us range, splitting the difference between current generation DRAM (~80-100ns) and PCIe-based Optane parts (~10us). While the DIMM form factor is certainly faster, there is still a management layer at play here, meaning some form of controller or a software layer to handle wear leveling. No raw XPoint sitting on the memory bus just yet.
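For what it's worth, that 1-3us guess lands right around the geometric mean of the two neighboring tiers, which is where a middle tier "should" sit on a log scale (figures from above):

```python
import math

dram_latency_s = 100e-9   # ~100 ns current-generation DRAM
pcie_latency_s = 10e-6    # ~10 us PCIe-based Optane parts

# Geometric mean: the halfway point between two tiers on a log scale.
midpoint = math.sqrt(dram_latency_s * pcie_latency_s)
print(f"{midpoint * 1e6:.1f} us")  # 1.0 us
```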
Also out of the event came talk of QLC NAND flash. Recently announced by Intel / Micron alongside 96-layer 3D NAND development, QLC squeezes higher capacities out of a given NAND flash die. Endurance does take a hit, but so long as the higher density media is matched to appropriate client/enterprise workloads, there should be no issue with premature media wear-out or data retention. Micron has already launched an enterprise QLC part, and while Intel has been hush-hush on actual product launches, they did talk about both client and enterprise QLC parts (with the latter pushing into 20TB in a 2.5" form factor).
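The density/endurance trade behind QLC is straightforward bits-per-cell math: each added bit doubles the number of voltage levels a cell must discriminate (hurting endurance and retention margins), while the capacity gain per step keeps shrinking:

```python
# Bits per cell for each NAND generation (standard definitions).
bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

# Voltage levels a cell must reliably discriminate: 2^bits.
levels = {name: 2 ** bits for name, bits in bits_per_cell.items()}
print(levels["QLC"])  # 16

# Capacity gain of QLC over TLC on the same die:
gain = bits_per_cell["QLC"] / bits_per_cell["TLC"] - 1
print(f"{gain:.0%}")  # 33%
```

So QLC asks each cell to distinguish twice as many levels as TLC for only a third more capacity, which is why workload matching matters.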
Subject: Storage | November 15, 2017 - 09:59 PM | Allyn Malventano
Tagged: NVDIMM, XPoint, 3D XPoint, 32GB, NVDIMM-N, NVDIMM-F, NVDIMM-P, DIMM
We're finally starting to see NVDIMMs materialize beyond unobtainium status. Micron recently announced a 32GB NVDIMM-N:
These come with 32GB of DRAM plus 64GB of SLC NAND flash.
These are in the NVDIMM-N form factor and can offer some very impressive latency improvements over other non-volatile storage methods.
Next up is Intel, who recently presented at the UBS Global Technology Conference:
We've seen Intel's Optane in many different forms, and now it looks like we finally have a date for 3D XPoint DIMMs - 2nd half of 2018! There are lots of hurdles to overcome, as the JEDEC spec is not yet finalized (and might not be by the time this launches). Motherboard and BIOS support also needs to become more widespread for this to take off.
Don't expect this to be in your desktop machine anytime soon, but one can hope!
Press blast for the Micron 32GB NVDIMM-N appears after the break.
Subject: General Tech, Memory, Storage | May 26, 2017 - 10:14 PM | Tim Verry
Tagged: XPoint, Intel, HPC, DIMM, 3D XPoint
Intel recently teased a bit of new information on its 3D XPoint DIMMs and gave its first public demonstration of the technology at the SAP Sapphire conference, where SAP’s HANA in-memory data analytics software was shown working with the new “Intel persistent memory.” Slated to arrive in 2018, the new Intel DIMMs based on the 3D XPoint technology developed by Intel and Micron will work in systems alongside traditional DRAM to provide a pool of fast, low latency, and high density nonvolatile storage that is a middle ground between expensive DDR4 and cheaper NVMe SSDs and hard drives. When looking at the storage stack, storage density increases along with latency the further it gets from the CPU. The opposite is also true: as storage and memory get closer to the processor, bandwidth increases, latency decreases, and cost per unit of storage increases. Intel is hoping to bridge the gap between system DRAM and PCI-E and SATA storage.
According to Intel, system RAM offers up 10 GB/s per channel and approximately 100 nanoseconds of latency. 3D XPoint DIMMs will offer 6 GB/s per channel and about 250 nanoseconds of latency. Below that are the 3D XPoint-based NVMe SSDs (e.g. Optane) on a PCI-E x4 bus, where they max out the bandwidth of the bus at ~3.2 GB/s with 10 microseconds of latency. Intel claims that non-XPoint NVMe NAND solid state drives have around 100 microseconds of latency, and of course, it gets worse from there with NAND-based SSDs or even hard drives hanging off the SATA bus.
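Gathering Intel's quoted tiers into one place makes the gaps easier to see (all figures are from the paragraph above):

```python
# Intel's quoted latency tiers, normalized against DRAM.
tiers = {
    "DDR4 DRAM":       100e-9,   # ~100 ns
    "XPoint DIMM":     250e-9,   # ~250 ns
    "XPoint NVMe SSD": 10e-6,    # ~10 us
    "NAND NVMe SSD":   100e-6,   # ~100 us
}

for name, latency in tiers.items():
    ratio = latency / tiers["DDR4 DRAM"]
    print(f"{name:16s} {ratio:8.1f}x DRAM latency")
```

The jump from the XPoint DIMM tier (2.5x DRAM) down to even the fastest SSD tier (100x DRAM) is where the "bridge the gap" pitch lives.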
Intel’s new XPoint DIMMs have persistent storage and will offer more capacity than will be possible and/or cost effective with DDR4 DRAM. In giving up some bandwidth and latency, enterprise users will be able to have a large pool of very fast storage for their databases and other latency and bandwidth sensitive workloads. Intel does note that there are security concerns with the XPoint DIMMs being nonvolatile: an attacker with physical access could simply pull the DIMM and walk away with the data (it is at least theoretically possible to grab some data from RAM as well, but it will be much easier to grab it from the XPoint sticks). Encryption and other security measures will need to be implemented to secure the data, both in use and at rest.
Interestingly, Intel is not positioning the XPoint DIMMs as a replacement for RAM, but instead as a supplement. RAM and XPoint DIMMs will be installed in different slots of the same system and the DDR4 RAM will be used for the OS and system critical applications while the XPoint pool of storage will be used for storing data that applications will work on much like a traditional RAM disk but without needing to load and save the data to a different medium for persistent storage and offering a lot more GBs for the money.
While XPoint is set to arrive next year along with Cascade Lake Xeons, it will likely be a couple of years before the technology takes off. Adoption is going to require hardware and software support in workstations and servers, as well as developers willing to take advantage of it when writing their specialized applications. Fortunately, Intel started shipping the memory modules to its partners for testing earlier this year. It is an interesting technology, and the DIMM solution and direct CPU interface will really let 3D XPoint memory shine and reach its full potential. It will primarily be useful for the enterprise, scientific, and financial industries, where there is a huge need for faster and lower latency storage that can accommodate massive (multiple terabyte+) data sets that continue to get larger and more complex. It is a technology that likely will not trickle down to consumers for a long time, but I will be ready when it does. In the meantime, I am eager to see what kinds of things it will enable the big data companies and researchers to do! Intel claims it will be useful not only for supporting massive in-memory databases and accelerating HPC workloads but also for things like virtualization, private clouds, and software defined storage.
What are your thoughts on this new memory tier and the future of XPoint?
- Intel Has Started Shipping Optane Memory Modules
- Intel Optane Memory 32GB Review - Faster Than Lightning
- A Closer Look at Intel's Optane SSD DC P4800X Enterprise SSD Performance
Subject: Storage | July 31, 2013 - 05:34 AM | Tim Verry
Tagged: diablo technologies, DIMM, nand, flash memory, memory channel storage
Ottawa-based Diablo Technologies unveiled a new flash storage technology yesterday that it calls Memory Channel Storage. As the name suggests, the technology puts NAND flash into a DIMM form factor and interfaces the persistent storage directly with the processor via the integrated memory controller.
Memory Channel Storage (MCS) is a drop-in replacement for DDR3 RDIMMs (Registered DIMMs) in servers and storage arrays. Unlike DRAM, MCS is persistent storage backed by NAND flash, and it can allow servers to have terabytes of storage connected to the CPU via the memory interface instead of mere gigabytes of DRAM acting as either system memory or block storage. The photo provided in the technology report (PDF) shows a 400GB MCS module that can slot into a standard DIMM slot, for example.
Diablo Technologies claims that MCS exhibits lower latencies and higher bandwidth than PCI-E and SATA SSDs. More importantly, the storage latency is predictable and consistent, making it useful for applications such as high frequency stock trading, where speed and deterministic latency are paramount. Further, users can get linear increases in throughput with each additional Memory Channel Storage module added to the system. Latencies with MCS are as much as 85% lower than PCI-E SSDs and 96% lower than SAS/SATA SSDs, according to Diablo Technologies. NAND flash maintenance such as wear leveling is handled by an on-board logic chip.
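Diablo's percentage claims are easier to reason about as ratios. A quick conversion (the claimed reductions are from the press material; the 1 ms SATA access below is purely an illustrative number, not a Diablo figure):

```python
# Diablo's claimed latency reductions, turned into implied ratios.
pcie_reduction = 0.85   # "as much as 85% lower than PCI-E SSDs"
sata_reduction = 0.96   # "96% lower than SAS/SATA SSDs"

print(f"MCS latency is {1 - pcie_reduction:.2f}x a PCI-E SSD's")  # 0.15x
print(f"MCS latency is {1 - sata_reduction:.2f}x a SATA SSD's")   # 0.04x

# Illustration: a hypothetical 1 ms SATA access at the claimed reduction.
mcs_access_us = 1e-3 * (1 - sata_reduction) * 1e6
print(f"{mcs_access_us:.0f} us")  # 40 us
```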
Diablo Technologies is also aiming the MCS technology at servers running big data analytics, massive cloud databases, financial applications, server virtualization, and Virtual Desktop Infrastructure (VDI). MCS can act as super fast local storage or as a local cache for external (such as networked) storage in the form of mechanical hard drives or SSDs.
Some details about Memory Channel Storage are still unclear, but it looks promising. It will not be quite as fast in random access as DRAM, but it will be better (more bandwidth, lower latency) than both PCI-E and SATA-connected NAND flash-based SSDs. It would be awesome to see this kind of tech make its way to consumer systems so that I can have a physical RAMDisk with fast persistent storage (at least in form factor; MCS uses NAND, not DRAM chips).
The full press release can be found here.
Samsung recently released engineering samples of new 32GB DDR3 memory modules for evaluation. Specifically, the new modules are registered dual inline memory modules (RDIMMs) that use a “three dimensional through silicon via (TSV) package technology” that provides higher performance and density.
The new DIMMs are made from Samsung’s four gigabit 30 nanometer-class DDR3 DRAM, and are capable of a 1,333 MT/s data rate. Further, they consume 4.5 watts of power, which Samsung claims makes them among the lowest power consuming enterprise DIMMs. Compared to LRDIMMs (load reduced modules), the Samsung modules offer 30 percent energy savings.
The company claims that these power savings are the direct result of the through silicon via technology that allows them to vertically stack the NAND and maintain power levels comparable to single stacked chips. Further, the company stated that they are working with CPU and controller engineers to hasten the adoption rate of higher capacity DIMMs. No word yet on pricing or whether these DIMMs will ever see full production and enterprise usage in their current form.
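If the 4.5 watt figure really is a 30 percent saving over an LRDIMM, the implied LRDIMM draw follows directly (both inputs are from the announcement above; this is just the inverse of the percentage):

```python
# Implied power draw of the comparison LRDIMM, from the claimed savings.
samsung_module_w = 4.5   # Samsung's stated module power
savings = 0.30           # claimed saving versus an LRDIMM

lrdimm_w = samsung_module_w / (1 - savings)
print(f"{lrdimm_w:.1f} W")  # 6.4 W
```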