Intel Has Started Shipping Optane Memory Modules

Subject: Memory | February 3, 2017 - 08:42 PM
Tagged: XPoint, server, Optane, Intel Optane, Intel, big data

Last week Hexus reported that Intel has begun shipping Optane memory modules to its partners for testing. This year should see the launch of both these enterprise products designed for servers and the tiny application-accelerator M.2 solid state drives, all based on the Intel and Micron joint 3D XPoint memory venture. The modules Intel is now shipping are the former type of Optane memory, and they will be able to replace DDR4 DIMMs (RAM) with a memory solution that is not as fast but is cheaper and offers much larger capacities. The Optane modules are designed to slot into DDR4-type memory slots on server boards. The benefit of such a product lies in big data and scientific workloads, where massive datasets can be held in primary memory and the processor(s) can access them at much lower latencies than if they had to reach out to mass storage on spinning rust or even SAS or PCI-E solid state drives. Holding all the working data in one pool of memory will also be cheaper with Optane, as it is allegedly priced closer to NAND than to RAM, and the cost of RAM adds up extremely quickly when you need many terabytes of it (or more!). Technologies attempting to bring higher-capacity non-volatile and/or flash-based storage to the memory module form factor have been theorized or in the works in various forms for years now, but it appears that Intel will be the first to roll out actual products.


It will likely be years before the technology trickles down to consumer desktops and notebooks, so slapping what would effectively be a cheap RAM disk into your PC is still a ways out. Consumers will get a small taste of Optane memory in the form of tiny storage drives, which were rumored for a first quarter 2017 release following Intel's Kaby Lake Z270 motherboards. Previous leaks suggest that the Intel Optane Memory 8000P will come in 16 GB and 32 GB capacities in an M.2 form factor. With a single 128 Gb (16 GB) die, Intel is able to hit speeds that current NAND-flash-based SSDs can only reach with multiple dies. Specifically, the 16 GB Optane application accelerator drive is allegedly capable of 285,000 random read 4K IOPS, 70,000 random write 4K IOPS, sequential 128K reads of 1400 MB/s, and sequential 128K writes of 300 MB/s. The 32 GB Optane drive is a bit faster at 300,000 IOPS, 120,000 IOPS, 1600 MB/s, and 500 MB/s respectively.
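As a sanity check on those leaked figures, a 4K IOPS rating converts directly to implied throughput (IOPS × 4 KiB). A quick sketch using the numbers above:

```python
# Convert the leaked 4K random IOPS ratings into implied throughput.
KIB = 1024

def iops_to_mb_s(iops, block_bytes=4 * KIB):
    """Throughput in MB/s implied by an IOPS rating at a given block size."""
    return iops * block_bytes / 1e6

# (read IOPS, write IOPS) from the leaked Optane Memory 8000P specs above
drives = {"16GB": (285_000, 70_000), "32GB": (300_000, 120_000)}

for name, (reads, writes) in drives.items():
    print(f"{name}: ~{iops_to_mb_s(reads):.0f} MB/s random read, "
          f"~{iops_to_mb_s(writes):.0f} MB/s random write")
```

Notably, the implied random read throughput (~1167 MB/s for the 16 GB drive) is not far off its 1400 MB/s sequential rating, which is exactly the kind of behavior 3D XPoint's low latency promises.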

Unfortunately, I do not have any numbers on how fast the Optane memory that slots into DDR4 slots will be, but seeing as two dies already max out the x2 PCI-E link used by the M.2 Optane SSD, a dual-sided memory module packed with rows of Optane dies on the significantly wider memory bus is very promising. Performance should land closer to (but slower than) DDR4 while remaining much faster than NAND flash, and it is still non-volatile (it does not need constant power to retain data).
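For a rough sense of where such a module would sit in the memory hierarchy, here is a sketch using commonly cited order-of-magnitude access latencies; the Optane DIMM figure is purely an assumption, since Intel has not published one:

```python
# Rough order-of-magnitude access latencies (assumptions, not published specs).
latencies_ns = {
    "DDR4 DRAM":          100,         # ~tens to ~100 ns
    "Optane DIMM (est.)": 350,         # assumed: a few times DRAM latency
    "NVMe NAND SSD":      100_000,     # ~100 microseconds
    "SATA NAND SSD":      500_000,
    "7200 RPM HDD":       10_000_000,  # ~10 milliseconds
}

for tier, ns in latencies_ns.items():
    print(f"{tier:>20}: {ns:>12,} ns  ({ns / latencies_ns['DDR4 DRAM']:,.0f}x DRAM)")
```

Even at several times DRAM latency, such a module would still be hundreds of times faster to reach than an NVMe SSD, which is the whole appeal for big-data workloads.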

I am interested to see what the final numbers are for Intel's Optane RAM and Optane storage drives. The company has certainly dialed down the hype for the technology as it approached fruition, though that may have more to do with what it can deliver right now versus what the 3D XPoint memory technology itself is potentially capable of enabling. I look forward to what it will enable in the HPC market and, eventually, what will be possible for the desktop and gaming markets.

What are your thoughts on Intel and Micron's 3D XPoint memory and Intel's Optane implementation (Micron's implementation is QuantX)?

Source: Hexus

February 3, 2017 | 09:37 PM - Posted by Paul A. Mitchell (not verified)

4 x NVMe 2.5" SSD in RAID-0 = 31,507.2 MB/sec @ PCIe 4.0
4 x NVMe 2.5" SSD in RAID-0 = 15,753.6 MB/sec @ PCIe 3.0
compare DDR3-1600 x 8 = 12,800.0 MB/sec
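The figures in the comment above can be checked against the PCIe link-rate math (assuming each NVMe drive uses four lanes; PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding):

```python
# Verify the comment's bandwidth figures from PCIe link rates.
lane_mb_s = 8e9 * (128 / 130) / 8 / 1e6  # PCIe 3.0: 8 GT/s, 128b/130b encoding
drive_x4 = 4 * lane_mb_s                 # one NVMe drive on four lanes
raid0_pcie3 = 4 * drive_x4               # four drives in RAID-0
raid0_pcie4 = 2 * raid0_pcie3            # PCIe 4.0 doubles the per-lane rate
ddr3_1600 = 1600 * 8                     # 1600 MT/s x 8 bytes = MB/s, one channel

print(f"PCIe 3.0 RAID-0: ~{raid0_pcie3:,.1f} MB/s")
print(f"PCIe 4.0 RAID-0: ~{raid0_pcie4:,.1f} MB/s")
print(f"DDR3-1600 (one channel): {ddr3_1600:,} MB/s")
```

The outputs land within a rounding error of the comment's 15,753.6 and 31,507.2 MB/s figures, and 1600 MT/s × 8 bytes does give 12,800 MB/s for a single DDR3-1600 channel.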

February 3, 2017 | 09:47 PM - Posted by razor512

For M.2 slots that are on the PCH, you often have a total of about 4 GB/s shared between everything on the PCH.

Beyond that, DDR3 1600 9-9-24 1T often does a little over 21-23GB/s in a dual channel config, with latency in the 40-50ns range.

Speaking of DDR3, I wonder why they can't make a motherboard where the PCH has a DDR3 memory controller, so users can recycle their old DDR3 RAM by sticking it into a new build and have that old RAM act as a RAM disk for temp files and caching. Throwing the cache of Photoshop and many video editors onto a RAM disk does offer a noticeable improvement, and something like that would be useful now, at little to no cost, for many users upgrading to that platform.

February 4, 2017 | 05:41 PM - Posted by Paul A. Mitchell (not verified)

> why can't they make a motherboard where the PCH can have a DDR3 memory controller, where users can recycle their old DDR3 RAM by sticking it into a new build, and then have that old RAM act as a RAM disk for temp files and caching.

I agree COMPLETELY, particularly if there were
empty SODIMM sockets for this purpose, which also
save a lot of motherboard real estate!

Another variation of that idea is to modify
triple-channel memory subsystems, and dedicate
the third channel for this same purpose:

the other 2 channels can run in dual-channel mode,
or if there are 2 sockets for each of those 2 channels,
then those 2 channels can run in quad-channel mode.

FYI: we were very recently awarded a Utility Patent
by the U.S. Patent and Trademark Office for our
invention called "BayRAM Five":

And, a "premium" option would be to mount
Optane (or other compatible NV-DRAM)
on the SO-DIMM form factor, with compatible
edge connectors e.g. DDR3 or DDR4 JEDEC standards.

The key is to satisfy the protocol with perfection,
then the real differences are technically TRANSPARENT
to the chipset and memory controller logic.

As long as these non-volatile SO-DIMMs
have edge connectors that look just like
DDR3 or DDR4 DIMMs, the world will be onto an
entirely new range of exciting developments.


February 5, 2017 | 06:24 PM - Posted by Tim Verry

That would actually be really cool. The CPU would need two memory controllers though (or, well, I guess they could put the memory controller in the PCH...) and they'd have to make the DMI link between the CPU and PCH a heck of a lot wider!

February 3, 2017 | 09:39 PM - Posted by razor512

The Optane memory seems like it will need applications that are optimized to take advantage of it. I wonder when we will get any examples of it showing a meaningful real-world benefit.

Even if it offers a benefit, will the price-to-performance ratio make it a good buy? For example, suppose the price per GB is 30% lower than DRAM's but it only offers 10% of the benefit of buying RAM; then the user would be better off spending a little more and getting more RAM.
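The commenter's break-even reasoning can be sketched numerically (all prices and performance ratios here are hypothetical, purely to illustrate the trade-off):

```python
# Hypothetical price/performance trade-off between DRAM and Optane DIMMs.
dram_price_per_gb = 10.0                        # $/GB (made-up figure)
optane_price_per_gb = dram_price_per_gb * 0.70  # assumed 30% cheaper per GB
optane_relative_perf = 0.10                     # assumed 10% of DRAM's benefit

budget = 1000.0  # dollars to spend on memory
dram_gb = budget / dram_price_per_gb
optane_gb = budget / optane_price_per_gb

print(f"DRAM:   {dram_gb:.0f} GB at full speed")
print(f"Optane: {optane_gb:.0f} GB at ~{optane_relative_perf:.0%} of DRAM speed")
# If the working set fits in the DRAM capacity, DRAM wins outright;
# Optane only pays off when the extra capacity avoids hitting mass storage.
```

The same budget buys roughly 43% more Optane capacity under these assumptions, so the answer hinges on whether the workload is capacity-bound or latency-bound.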

February 4, 2017 | 05:58 PM - Posted by Paul A. Mitchell (not verified)

> The optane memory seems like it will need applications that are optimized to take advantage of it.

No problema! There are already lots of applications
which can benefit from this type of memory/storage
e.g. re-locate your Firefox browser cache:

One of my other favorites, conceived many moons ago,
is a "Format RAM" option in BIOS/UEFI subsystems:

I can easily visualize a "memory-resident OS"
stored in Non-Volatile DRAM.



/s/ MRFS (Memory-Resident File Systems)

February 8, 2017 | 03:08 AM - Posted by BlackDove (not verified)

Pre-exascale and exascale computers will require better checkpointing. Non-volatile RAM in the forms of Optane, STT-MRAM, and carbon nanotube RAM is being researched for that specific application, which will be a massive market by 2018.

February 4, 2017 | 12:50 PM - Posted by CNote

I wish you could just slap a couple sticks of even SODIMMs onto your motherboard for a RAM disk.

February 4, 2017 | 12:54 PM - Posted by willmore

So, Optane memory on a next gen AMD GPU? That would be somewhat ironic.

February 4, 2017 | 06:34 PM - Posted by Anonymous (not verified)

Not really, considering Optane is just Intel's branding for XPoint, and Micron will have its competing XPoint products under its own brand name, QuantX! There will be competition for XPoint memory, and AMD can source its supplies from Micron or Intel, whichever is the best deal.

What is ironic is the moronic lack of product knowledge among some in the gaming community, even after the many XPoint articles over the last few years about the Intel/Micron XPoint development collaboration.

The author of this article even asked: "What are your thoughts on Intel and Micron's 3D XPoint memory and Intel's Optane implementation (Micron's implementation is QuantX)?"

So what makes you think that AMD needs Intel's Optane any more than Micron's QuantX? They are the same XPoint technology!

February 5, 2017 | 11:07 AM - Posted by willmore

The article mentions that Intel is shipping. Nothing about Micron shipping. So, why would I care about Micron?

February 6, 2017 | 11:25 AM - Posted by Anonymous (not verified)

Oh, you'll care about Micron, as Intel will not be able to overcharge for its brand of XPoint with Micron producing a competing brand of XPoint. So AMD will be able to get a competitively priced source of XPoint to use in its memory products.

Do you like paying more for technology than you would have to if there were competition?

February 8, 2017 | 03:20 AM - Posted by BlackDove (not verified)

Micron and Intel kinda worked together to make it.

February 4, 2017 | 04:00 PM - Posted by FriedPenguin (not verified)

Ugh. Anyone proofread anymore?! Accerlator? Micro? You mean accelerator and Micron. Can't even finish reading because I can't get past the first paragraph!

February 5, 2017 | 06:52 PM - Posted by Tim Verry

You can proceed past those typos now :-). Thanks

February 4, 2017 | 06:54 PM - Posted by Anonymous (not verified)

I'll take some XPoint dies on the DIMM module for laptops, along with the regular DRAM dies, for some extra-fast, low-latency OS/paging file uses, and let the DIMM module's controller handle the movement in the background via a backplane bus on the DIMM module to and from the DRAM dies and XPoint dies/NVM! Imagine having loads of memory-to-NVM transfers that put no extra bandwidth load on the CPU-to-DIMM memory channels, and you can see that having XPoint on the DIMM module will be great for laptops and PCs.

Do the same for HBM2 with an amended JEDEC/HBM# standard that adds an XPoint die to the HBM's DRAM die stacks, using TSVs for a backplane bus to move data from the DRAM dies to the XPoint dies in the background. XPoint's durability will have to be tested and proven before it can be used in a new HBM#/NVM JEDEC standard.

February 5, 2017 | 11:41 PM - Posted by Jordan Viray (not verified)

32GB is iffy even for an OS drive but 75K read IOPS at 4KQD1 is too compelling even if cost per GB ends up being closer to DRAM than NAND. It's like X25 all over again.

February 6, 2017 | 10:28 AM - Posted by Paul A. Mitchell (not verified)

Here's a thought, which is very close to what
we are doing here with our workstations:

Begin with at least 2 JBOD SSDs.

Format each SSD with 2 partitions: OS + data

First partition on each is 50GB.

OS is installed on both partitions, mirror copies.

When preferred OS gets corrupted or malfunctions,
boot from the second OS.

Use second OS to restore good drive image
of first OS.

Boot from first OS.

What I would really like to see is a
much faster way of switching to the
second OS, for the limited purpose
of restoring the drive image of the
first OS.

If each OS is already "memory-resident",
the "switch" to hand control over to
the second OS should be a user option.

NVDRAM should make this "switching"
happen a LOT faster than the time
now required to boot an OS and
re-load RAM from NAND flash SSDs
or rotating HDDs.

So, to elaborate this idea a bit more:

A triple-channel chipset dedicates
one of 3 channels to Optane-like NVDRAM.

That NVDRAM is formatted with 2 partitions,
and mirror copies of our OS are installed
in both partitions.

Call the second OS a "hot spare OS" :)

Then, if the first OS gets into trouble,
a user option switches to the second OS.
