Intel Persistent Memory Using 3D XPoint DIMMs Expected Next Year

Subject: General Tech, Memory, Storage | May 26, 2017 - 10:14 PM |
Tagged: XPoint, Intel, HPC, DIMM, 3D XPoint

Intel recently teased a bit of new information on its 3D XPoint DIMMs and gave the technology its first public demonstration at the SAP Sapphire conference, where SAP’s HANA in-memory data analytics software was shown working with the new “Intel persistent memory.” Slated to arrive in 2018, the new Intel DIMMs, based on the 3D XPoint technology developed by Intel and Micron, will work in systems alongside traditional DRAM to provide a pool of fast, low-latency, high-density nonvolatile storage that is a middle ground between expensive DDR4 and cheaper NVMe SSDs and hard drives. Looking at the storage stack, density and latency both increase the further storage sits from the CPU; conversely, as storage and memory get closer to the processor, bandwidth increases, latency decreases, and cost per unit of storage rises. Intel is hoping to bridge the gap between system DRAM and PCI-E and SATA storage.

Intel persistent memory DIMM.jpg

According to Intel, system RAM offers up 10 GB/s per channel and approximately 100 nanoseconds of latency. 3D XPoint DIMMs will offer 6 GB/s per channel and about 250 nanoseconds of latency. Below that are the 3D XPoint-based NVMe SSDs (e.g. Optane) on a PCI-E x4 bus, where they max out the bandwidth of the bus at ~3.2 GB/s with 10 microseconds of latency. Intel claims that non-XPoint NVMe NAND solid state drives have around 100 microseconds of latency, and of course, it gets worse from there with SATA-attached SSDs and hard drives.
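To put those tiers side by side, here is a quick sketch tabulating the figures Intel quotes above and computing each tier's latency relative to DRAM (the numbers are Intel's approximations, not measurements of ours):

```python
# Rough storage-tier figures as quoted by Intel (approximate, illustrative).
tiers = [
    # (name, bandwidth GB/s, latency ns)
    ("DDR4 DRAM (per channel)",      10.0,       100),
    ("3D XPoint DIMM (per channel)",  6.0,       250),
    ("Optane NVMe SSD (PCIe x4)",     3.2,    10_000),
    ("NAND NVMe SSD",                 3.2,   100_000),
]

for name, bw, lat_ns in tiers:
    print(f"{name:32s} {bw:5.1f} GB/s  {lat_ns:>8,} ns  "
          f"({lat_ns / tiers[0][2]:,.0f}x DRAM latency)")
```

The takeaway is how evenly XPoint DIMMs split the gap: 2.5x DRAM latency versus the 100x of even the fastest NAND NVMe drives.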

Intel’s new XPoint DIMMs offer persistent storage and more capacity than would be possible and/or cost effective with DDR4 DRAM. In exchange for some bandwidth and latency, enterprise users will be able to have a large pool of very fast storage for their databases and other latency- and bandwidth-sensitive workloads. Intel does note that there are security concerns with the XPoint DIMMs being nonvolatile, in that an attacker with physical access could easily pull the DIMM and walk away with the data (it is at least theoretically possible to grab some data from RAM as well, but it will be much easier to grab the data from the XPoint sticks). Encryption and other security measures will need to be implemented to secure the data, both in use and at rest.

Intel Slide XPoint Info.jpg

Interestingly, Intel is not positioning the XPoint DIMMs as a replacement for RAM, but instead as a supplement. RAM and XPoint DIMMs will be installed in different slots of the same system: the DDR4 will be used for the OS and system-critical applications, while the XPoint pool will be used for storing the data that applications work on, much like a traditional RAM disk but without needing to load and save the data to a different medium for persistent storage, and offering a lot more GBs for the money.
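To make that RAM-disk-without-the-reload idea concrete, here is a minimal sketch of the app-direct style of access this enables, assuming the XPoint DIMMs are exposed to software as a DAX-mounted filesystem on Linux (real deployments would likely use purpose-built persistent memory libraries; a regular temp file stands in for the pmem mount here so the sketch is runnable):

```python
import mmap
import os
import tempfile

# In a real system this path would live on a DAX-mounted pmem filesystem
# (e.g. a hypothetical /mnt/pmem); a temp file stands in here.
path = os.path.join(tempfile.mkdtemp(), "appdata.bin")

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, 4096)
buf = mmap.mmap(fd, 4096)

buf[0:5] = b"hello"   # plain loads/stores, no block I/O path
buf.flush()           # persist the mapped range (msync under the hood)
buf.close()
os.close(fd)
```

The point of the sketch: the application writes to memory and the data is already persistent, with no separate "save to disk" step or second copy on another medium.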

While XPoint is set to arrive next year alongside Cascade Lake Xeons, it will likely be a couple of years before the technology takes off. Adoption will require hardware and software support in workstations and servers, as well as developers willing to take advantage of it when writing their specialized applications. Fortunately, Intel started shipping the memory modules to its partners for testing earlier this year. It is an interesting technology, and the DIMM form factor and direct CPU interface will really let the 3D XPoint memory shine and reach its full potential. It will primarily be useful for the enterprise, scientific, and financial industries, where there is a huge need for faster, lower-latency storage that can accommodate massive (multi-terabyte) data sets that continue to get larger and more complex. It is a technology that likely will not trickle down to consumers for a long time, but I will be ready when it does. In the meantime, I am eager to see what kinds of things it will enable the big data companies and researchers to do! Intel claims it will be useful not only for supporting massive in-memory databases and accelerating HPC workloads, but also for things like virtualization, private clouds, and software defined storage.

What are your thoughts on this new memory tier and the future of XPoint?

Source: Intel
Subject: Storage
Manufacturer: Intel

Introduction and Specifications

Introduction

Intel-3D-Xpoint.png

XPoint. Optane. QuantX. We've been hearing these terms thrown around for two years now: a form of 3D stackable non-volatile memory that promised 10x the density of DRAM and 1000x the speed and endurance of NAND. These were bold claims, and over the following months we would see them misunderstood and misconstrued by many in the industry. The misconceptions were further amplified by some poor demo choices on the part of Intel (fortunately countered by some better choices made by Micron). Cooler heads ultimately prevailed, as Jim Handy and other industry analysts helped explain that a 1000x improvement at the die level does not translate to the same improvement at the device level, especially when the first round of devices must comply with what will soon become a legacy method of connecting a persistent storage device to a PC.

Did I just suggest that PCIe 3.0 and the NVMe protocol, developed just for high-speed storage, are already legacy tech? Well, sorta.

ss-142.png

The 'Future NVM' bar at the bottom of that chart was a two-year-old prototype iteration of what is now Optane. Note that while NVMe was able to shrink down the yellow bar a bit, as you introduce faster and faster storage, the rest of the equation (meaning software, including the OS kernel) starts to have a larger and larger impact on limiting the ultimate speed of the device.

800px-Nand_flash_structure.svg_.png

NAND Flash simplified schematic (via Wikipedia)

Before getting into the first retail product to push all of these links in the storage chain to the limit, let's explain how XPoint works and what makes it faster. Taking random writes as an example, NAND flash (above) must program cells in pages and erase cells in blocks. As modern flash has increased in capacity, the sizes of those pages and blocks have scaled up roughly proportionally; today, pages exceed 4KB and block sizes run into the megabytes. When it comes to randomly writing to an already full section of flash, simply changing the contents of one byte on one page requires the clearing and rewriting of the entire block. The ratio of what the flash had to rewrite to accomplish that operation to what you actually wanted to write is called the write amplification factor. It's something that must be dealt with when it comes to flash memory management, but for XPoint it is a completely different story:
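The worst case is easy to put in numbers. A short sketch, using illustrative page/block geometry (not any specific part's actual layout):

```python
# Worst-case write amplification for a 1-byte update into a full NAND block.
# Page and block sizes below are illustrative assumptions.
PAGE_SIZE = 16 * 1024                       # bytes programmed at a time
PAGES_PER_BLOCK = 256
BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK    # bytes erased at a time (4 MiB)

def write_amplification(bytes_intended: int, bytes_rewritten: int) -> float:
    """WAF = data physically written / data the host asked to write."""
    return bytes_rewritten / bytes_intended

# Changing one byte inside a full block forces a read-modify-erase-rewrite
# of the whole block:
waf = write_amplification(1, BLOCK_SIZE)
print(f"WAF for a 1-byte update: {waf:,.0f}x")
```

Real controllers mitigate this with caching, garbage collection, and overprovisioning, but the structural penalty is what XPoint sidesteps entirely.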

reading_bits_in_crosspoint_array.jpg

XPoint is bit addressable. The 'cross' structure means you can select very small groups of data via wordlines, with the ultimate selection resolving down to a single bit.

ss-141.png

Since the programmed element effectively acts as a resistor, its output is read directly and quickly. Even better, none of that write amplification nonsense mentioned above applies here at all. There are no pages or blocks; if you want to write a byte, go ahead. Better still, bits can be changed regardless of their former state, meaning no erase or clear cycle must take place before writing - you just overwrite directly over what was previously stored. Is that 1000x-faster / 1000x-more-write-endurance-than-NAND thing starting to make more sense now?

Ok, with all of the background out of the way, let's get into the meat of the story. I present the P4800X:

P4800X.jpg

Read on for our full review of the P4800X!

Intel Officially Launches Optane Memory, Shows Performance

Subject: Storage | March 27, 2017 - 12:16 PM |
Tagged: XPoint, Optane Memory, Optane, M.2, Intel, cache, 3D XPoint

We are just about to hit two years since Intel and Micron jointly launched 3D XPoint, and there have certainly been a lot of stories about it since. Intel officially launched the P4800X last week, and this week they are officially launching Optane Memory. The base-level information about Optane Memory is mostly unchanged; however, we do have a slide deck we are allowed to pick from to point out some of the things we can look forward to once the new tech starts hitting devices you can own.

Optane Memory-6.png

Alright, so this is Optane Memory in a nutshell. Put some XPoint memory on an M.2 form factor device, leverage Intel's SRT caching tech, and you get a 16GB or 32GB cache laid over your system's primary HDD.

Optane Memory-15.png

To help explain what Optane can do for typical desktop workloads, first we need to dig into queue depths a bit. Above are some examples of the typical QD various desktop applications run at. This data is from direct IO trace captures of systems in actual use. Now that we've established that the majority of desktop workloads operate at very low queue depths (<= 4), let's see where Optane performance falls relative to other storage technologies:

Optane Memory-22.png

There's a bit to digest in this chart, but let me walk you through it. The ranges tapering off show the percentage of IOs falling at the various queue depths, while the green, red, and orange lines ramping up to higher IOPS (right axis) show relative SSD performance at those same queue depths. The key to Optane's performance benefit here is that it can ramp up to full performance at very low QDs, while the other NAND-based parts require significantly more parallel requests to achieve full rated performance. This is what will ultimately lead to a much snappier responsiveness for, well, just about anything hitting the storage. Fun fact - there is actually a HDD on that chart. It's the yellow line that you might have mistaken for the horizontal axis :).

Optane Memory-11.png

As you can see, we have a few integrators on board already. Official support requires a 200-series motherboard and Kaby Lake CPU, but it is possible that motherboard makers could backport the required NVMe v1.1 and Intel RST 15.5 support to older systems.

Optane Memory-7.png

For those wondering whether caching is the only way power users will be able to go with Optane: that's not the case. Atop that pyramid sits an 'Intel Optane SSD', which should basically be a consumer version of the P4800X. It is sure to be an incredibly fast SSD, but that performance will most definitely come at a price!

We should be testing Optane Memory shortly and will have publishable results of this new tech as soon as we can!

Source: Intel

Intel Officially Kicks Off Optane Launch with SSD DC P4800X

Subject: Storage | March 19, 2017 - 12:21 PM |
Tagged: XPoint, SSD DC P4800X, Optane Memory, Optane, Intel, client, 750GB, 3D XPoint, 375GB, 1.5TB

Intel brought us out to their Folsom campus last week for some in-depth product briefings. Much of our briefing is still under embargo, but the portion that officially lifts this morning is the SSD DC P4800X:

Intel_SSD_4800_FlatFront_OnWhite_RGB_Small.jpg

optane-4.png

optane-9.png

MSRP for the 375GB model is estimated at $1520 (about $4/GB), which is rather spendy, but given that the product has shown it can effectively displace RAM in servers, we should be comparing the cost/GB with DRAM and not NAND. It should also be noted that this is nearly half the cost/GB of the X25-M at its launch. Capacities will go all the way up to 1.5TB, and U.2 form factor versions are also on the way.
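For those checking the math, the quoted per-GB figure works out as a simple division (numbers as given above):

```python
# Sanity-checking the quoted launch pricing.
msrp_usd = 1520
capacity_gb = 375

per_gb = msrp_usd / capacity_gb
print(f"${per_gb:.2f}/GB")   # lands right at the rough $4/GB figure
```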

For those wanting a bit more technical info, the P4800X uses a 7-channel controller, with the 375GB model having 4 dies per channel (28 total). Overprovisioning does not do for Optane what it did for NAND flash, as XPoint can be rewritten at the byte level and does not need to be programmed in (KB) pages and erased in larger (MB) blocks. The only extra space on Optane SSDs is for ECC, firmware, and a small spare area to map out any failed cells.

Those with a keen eye (and calculator) might have noted that the early TBW values only put the P4800X at 30 DWPD for a 3-year period. At the event, Intel confirmed that they anticipate the P4800X to qualify at that same 30 DWPD for a 5-year period by the time volume shipment occurs.
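For anyone reaching for that calculator, the DWPD-to-TBW conversion behind those figures is simple arithmetic. A sketch using the capacities and ratings stated above:

```python
# Converting an endurance rating from drive-writes-per-day to total TB written.
def tbw(capacity_gb: float, dwpd: float, years: float) -> float:
    """Total terabytes written over the warranty period."""
    return capacity_gb / 1000 * dwpd * 365 * years

three_year = tbw(375, 30, 3)   # the early rating for the 375GB model
five_year = tbw(375, 30, 5)    # what Intel expects to qualify by volume shipment
print(f"3-year: {three_year:,.2f} TBW, 5-year: {five_year:,.2f} TBW")
```

So holding 30 DWPD over five years instead of three is a ~67% bump in total rated bytes written, which is the substance of Intel's confirmation.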

Read on for more about the SSD DC P4800X (and other upcoming products!)

Intel Details Optane Memory System Requirements

Subject: General Tech, Storage | February 21, 2017 - 07:14 PM |
Tagged: Optane, kaby lake, Intel, 3D XPoint

Intel has announced that its Optane memory will require an Intel Kaby Lake processor to function. While previous demonstrations of the technology used an Intel Skylake processor, it appears this configuration will not be possible on the consumer versions of the technology.

Intel Optane App Accelerator.jpg

Further, the consumer application accelerator drives will require a 200-series chipset motherboard and either an M.2 2280-S1-B-M or M.2 2242-S1-B-M connector with two or four PCI-E lanes. Motherboards will have to support NVMe v1.1 and Intel RST (Rapid Storage Technology) 15.5 or newer.

It is not clear why Intel is locking Optane technology to Kaby Lake: whether it is due to technical limitations that could not be resolved to keep Skylake compatible, or simply a matter of not wanting to support the older platform in order to focus on the new Kaby Lake processors. Either way, Kaby Lake is now required if you want UHD Blu-ray playback and Optane 3D XPoint SSDs.

What are your thoughts on this latest bit of Optane news? Has Intel sweetened the pot enough to encourage upgrade hold outs?


Source: Bit-Tech

Xaggerated claims from Intel? Psah, we've heard that googolplex of times before

Subject: General Tech | August 12, 2016 - 01:16 PM |
Tagged: 3D XPoint, Intel, FMS 2016

You might have caught our reference to this on the podcast: XPoint is amazingly fast, but the marketing claims were an order of magnitude or two off of the real performance levels. Al took some very nice pictures at FMS and covered what Micron had to say about their new QuantX drives. The Register also dropped by and offers a tidbit on the pricing: roughly four to five times as much as current flash, or about half the cost of an equivalent amount of RAM. They also compare the stated endurance of 25 complete drive writes per day to existing flash, which offers between 10 and 17 depending on the technology used.

The question they ask at the end is one many data centre managers will also be asking: is the actual speed boost worth the cost of upgrading, or will other, less expensive alternatives be more economical?

102021829-arms-full-of-money.530x298.jpg

"XPoint will substantially undershoot the 1,000-times-faster and 1,000-times-longer-lived-than-flash claims made by Intel when it was first announced – with just a 10-times speed boost and 2.5-times longer endurance in reality."

Here is some more Tech News from around the web:

Tech Talk

Source: The Register

Parsing the alphabet soup which is the current SSD market

Subject: Storage | November 10, 2015 - 01:10 PM |
Tagged: ssd, NVMe, M.2, M&A, 3D XPoint

This has been a huge year for SSDs, with a variety of new technologies and form factors to keep track of, not to mention the wide variety of vendors now shipping SSDs with a plethora of controllers embedded within. [H]ard|OCP has put together a guide to help you translate these acronyms into a form that will help you make an informed buying decision. You may already understand what NVMe offers or when 3D XPoint is the correct solution, but have you memorized what M.2 A, B, E, and M connectors look like? For information on those and more, check out their article and consider bookmarking it for future reference.

1445512944TxNdmOHVTw_2_1.png

"Since our last SSD update article, the last 7 months have seen no shortage of exciting announcements, and the enthusiast market has rapidly evolved in both positive and confusing ways. Let’s get up to speed on U.2, NVMe, 3D XPoint, M&A, and the rest of the buzzword soup that make up this market."

Here are some more Storage reviews from around the web:

Storage

Source: [H]ard|OCP