Intel Officially Launches Optane Memory, Shows Performance

Subject: Storage | March 27, 2017 - 12:16 PM
Tagged: XPoint, Optane Memory, Optane, M.2, Intel, cache, 3D XPoint

We are just about to hit two years since Intel and Micron jointly launched 3D XPoint, and there have certainly been a lot of stories about it since. Intel officially launched the P4800X last week, and this week they are officially launching Optane Memory. The base-level information about Optane Memory is mostly unchanged; however, we do have a slide deck to pick from to point out some of the things we can look forward to once the new tech starts hitting devices you can own.

[Slide: Optane Memory overview]

Alright, so this is Optane Memory in a nutshell. Put some XPoint memory on an M.2 form factor device, leverage Intel's SRT caching tech, and you get a 16GB or 32GB cache laid over your system's primary HDD.
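The caching idea itself is simple enough to sketch. Below is a minimal, hypothetical model of why a small fast cache in front of an HDD helps repeated reads of a hot data set. The LRU policy, latencies, and workload here are all invented for illustration; this is not Intel's SRT algorithm.

```python
from collections import OrderedDict

def simulate(accesses, cache_blocks, t_cache_us=10, t_hdd_us=10_000):
    """Average read latency (us) with an LRU block cache in front of an HDD.
    Latencies are illustrative placeholders, not measured Optane/HDD figures."""
    cache = OrderedDict()
    total = 0
    for block in accesses:
        if block in cache:
            cache.move_to_end(block)   # hit: refresh LRU position
            total += t_cache_us
        else:
            total += t_hdd_us          # miss: read from HDD, then cache the block
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least recently used
    return total / len(accesses)

# A skewed workload: a small hot set re-read often, plus some cold blocks.
accesses = [b % 100 for b in range(1000)] + list(range(1000, 1100))
with_cache = simulate(accesses, cache_blocks=128)
no_cache = simulate(accesses, cache_blocks=0)
```

With these made-up numbers the cached average latency lands well under a fifth of the HDD-only figure, which is the whole pitch: a cache far smaller than the disk still absorbs most of the hits.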

[Slide: typical desktop application Queue Depths]

To help explain what Optane can do for typical desktop workloads, first we need to dig into Queue Depths a bit. Above are some examples of the typical QDs various desktop applications run at. This data comes from direct IO trace captures of systems in actual use. Now that we've established that the majority of desktop workloads operate at very low Queue Depths (<= 4), let's see where Optane performance falls relative to other storage technologies:

[Slide: IOPS vs. Queue Depth for Optane, NAND SSDs, and an HDD]

There's a bit to digest in this chart, but let me walk you through it. The ranges tapering off show the percentage of IOs falling at the various Queue Depths, while the green, red, and orange lines ramping up to higher IOPS (right axis) show relative SSD performance at those same Queue Depths. The key to Optane's performance benefit here is that it can ramp up to full performance at very low QDs, while the other NAND-based parts require significantly more parallel requests to achieve their full rated performance. This is what will ultimately lead to much snappier responsiveness for, well, just about anything hitting the storage. Fun fact - there is actually an HDD on that chart. It's the yellow line that you might have mistaken for the horizontal axis :).
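Another way to see why the low-QD ramp matters: weight each drive's IOPS-at-QD by how often a desktop workload actually runs at that QD. The distribution and IOPS curves below are illustrative placeholders (not numbers read off Intel's chart), but they capture the shape: Optane saturates early, NAND needs parallelism it rarely gets on a desktop.

```python
# Hypothetical QD distribution for a desktop workload: most I/O at QD <= 4.
qd_share = {1: 0.55, 2: 0.25, 4: 0.15, 8: 0.05}

# Illustrative IOPS-vs-QD curves (invented): Optane near-flat, NAND ramping.
optane_iops = {1: 300_000, 2: 330_000, 4: 350_000, 8: 350_000}
nand_iops   = {1: 13_000,  2: 25_000,  4: 50_000,  8: 95_000}

def effective_iops(share, curve):
    """Weight each QD's throughput by how often the workload runs at that QD."""
    return sum(share[qd] * curve[qd] for qd in share)

opt = effective_iops(qd_share, optane_iops)
nand = effective_iops(qd_share, nand_iops)
```

Under these assumed curves, the effective (workload-weighted) gap is far larger than the drives' peak spec-sheet numbers would suggest, because the NAND part spends most of its time on the low, un-ramped part of its curve.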

[Slide: Optane Memory partner integrators]

As you can see, we have a few integrators on board already. Official support requires a 200-series chipset motherboard and a Kaby Lake CPU, but it is possible that motherboard makers could backport the required NVMe v1.1 and Intel RST 15.5 support to older systems.

[Slide: Intel storage product pyramid]

For those wondering whether caching is the only way power users will be able to go with Optane, that's not the case. Atop that pyramid sits an 'Intel Optane SSD', which should basically be a consumer version of the P4800X. It is sure to be an incredibly fast SSD, but that performance will most definitely come at a price!

We should be testing Optane Memory shortly and will finally have some publishable results on this new tech as soon as we can!

Source: Intel

March 27, 2017 | 12:50 PM - Posted by Anonymous (not verified)

Endurance testing is a must, and that includes testing to failure, since the overprovisioning must be there for long-term endurance at least on par with the overprovisioning regimens currently used by the NAND-based SSD/M.2 makers.

I do not trust Intel's pick-and-choose slide decks; I'll wait for endurance-to-failure testing metrics on randomly chosen store-bought samples.

These first Optane SKUs will require the latest Intel CPU/MB SKUs(?).

March 27, 2017 | 01:03 PM - Posted by zme-ul

it's rather irrelevant since Intel Optane is just cache - if it fails you toss it, no data loss

March 27, 2017 | 01:26 PM - Posted by Jeppe (not verified)

Optane is not just for cache. Sure, it can be used as a cache, but it can obviously also be used as pure storage or for storage tiering (including storage tiering with RAM).

March 27, 2017 | 01:31 PM - Posted by zme-ul

as of now, Optane is just cache - can't even be used to install Windows on it

March 27, 2017 | 03:28 PM - Posted by serpico (not verified)

I think you mean Optane Memory is just cache. The incoming Optane P4800X is not cache.

March 27, 2017 | 02:17 PM - Posted by Allyn Malventano

OP doesn't work the same for Optane as it does for NAND. There will be a few spares, sure, but it's not like it needs a complete extra set of storage to swap in because endurance is so bad. Further to your concern, Intel would not have launched a product meant to handle enterprise workloads 24/7 for 3+ years without the endurance being where it needs to be. Simple client / caching workloads are going to be insignificant compared to the other applications that have already proven themselves enough for Intel to stand behind it. To put it more simply: if the endurance isn't there, Intel will be RMAing a lot of drives. I don't see them doing that to themselves.

March 27, 2017 | 03:22 PM - Posted by Anonymous (not verified)

Do you know if it tracks wear on the cells somehow? If it does, how does it track that? You can't track wear level on a byte-by-byte basis, as it would take way too many resources. Does it maybe use cache-line-sized blocks for tracking wear levels?

March 27, 2017 | 04:01 PM - Posted by Allyn Malventano

I have been told that there is wear leveling at play, however, we do not know the resolution of that leveling. Optane Memory likely uses host managed leveling and a simpler controller on the device itself, and with the host system resources behind it and such a small storage area, we could see wear leveling approaching the byte level.
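For illustration only, here is a toy sketch of block-granularity wear tracking with a naive hot/cold remapping pass. Everything here (the block size implied, the counters, the swap policy) is invented; the actual resolution and policy inside Optane Memory are unknown, per the reply above.

```python
# Hypothetical sketch: count writes per fixed-size block and periodically
# remap logical blocks away from the most-worn physical blocks.
class WearTracker:
    def __init__(self, n_blocks):
        self.writes = [0] * n_blocks        # per-physical-block wear counters
        self.l2p = list(range(n_blocks))    # logical -> physical block map

    def write(self, logical):
        phys = self.l2p[logical]
        self.writes[phys] += 1

    def level(self):
        """Swap the mappings of the most- and least-worn physical blocks."""
        hot = max(range(len(self.writes)), key=self.writes.__getitem__)
        cold = min(range(len(self.writes)), key=self.writes.__getitem__)
        li, lj = self.l2p.index(hot), self.l2p.index(cold)
        self.l2p[li], self.l2p[lj] = self.l2p[lj], self.l2p[li]

tracker = WearTracker(8)
for _ in range(100):
    tracker.write(0)        # hammer one logical block
tracker.level()             # logical 0 now maps to the least-worn block
```

The tradeoff the comment raises is visible even in this toy: finer tracking granularity means more counters and map entries, which is why host-managed leveling (with host RAM behind it) could afford a much finer resolution than a small on-device controller.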

March 27, 2017 | 03:52 PM - Posted by Anonymous (not verified)

Yes, but that is enterprise-grade Optane, with enterprise levels of overprovisioning to make it meet that enterprise rating. Also, if any consumer Optane is used for Intel Rapid Storage Technology/Smart Response Technology and sleep/hibernation file storage, it's better to remember to save your open work before sleeping, lest things possibly go wrong on wake-up, especially on any brand new technology/IP like XPoint.

It's the consumer levels of Optane overprovisioning and durability/endurance that may be of more worry, as the enterprise users probably already have Intel Optane enterprise SKU samples in testing under NDA to properly vet any enterprise-branded Optane usage for the professional market.

The consumer Optane SKUs need to be tested to failure under all the conditions in which any consumer Optane M.2/other drives may be used, such as laptop sleep/wake accelerator usage cycle testing. Any maker's XPoint products are going to have to prove themselves in the field, so I'm inclined to take a wait-and-see approach for both Intel's and Micron's XPoint products with respect to XPoint's actual real-life durability/endurance before I'll feel comfortable making a purchase.

March 27, 2017 | 04:05 PM - Posted by Allyn Malventano

Client XPoint dies will still have to pass JEDEC testing to validate them at the intended amount of wear, which includes powered-down data retention for a period of 3-12 months - measured at the end of life of the NV storage. Since all of the people using these will be using them before their end of life, data retention periods should be even greater than the rating, which assumes worst-case wear at the end of the warranty period. Also, client data retention requirements are *more stringent* than enterprise parts (client = 52 weeks / enterprise = 12 weeks).

And again, OP does not work the same for Optane as it does for NAND devices. Intel does not appear to be nearly as concerned with cells 'burning out' over the product's usable lifetime as they are with NAND.

March 27, 2017 | 01:11 PM - Posted by Anonymous (not verified)

So Intel Optane Memory is for Word and Excel (among others). Interesting.

March 27, 2017 | 04:12 PM - Posted by Allyn Malventano

Basically. It's intended to help speed up typical user activities on systems that come with a large HDD. For a fraction of the cost of a 1TB SSD, you can get a 1TB HDD plus a 32GB Optane Memory module, and if you are doing typical productivity tasks and occasional gaming, you'll probably get close to pure SSD performance most of the time.

March 27, 2017 | 05:02 PM - Posted by Anonymously Anonymous (not verified)

Yeah, but we've already seen hybrid drives at work via Seagate. I know, I've got one from 4 years ago. That small amount of Optane memory won't do that much for OS boot times, which sound like they will still suck balls since 32GB is not much for your OS and ALL of your other apps.

March 29, 2017 | 12:45 PM - Posted by Markonen (not verified)

Wow, so typical users will drop $77 on a 32GB cache drive to "probably get close" "most of the time" to pure SSD performance, instead of dropping the same amount on a good 256GB SSD where they can put probably EVERYTHING they do? It makes zero sense. There is no market for this.

May 3, 2017 | 09:29 AM - Posted by amadsilentthirst

Sure, I would. Why not.

Present system 256GB SSD and a external 4TB storage drive.

I'm always shifting stuff onto the 4TB; my system is fast, granted, but it's a little cumbersome.

If I sprang for an Optane, I'd sell my SSD or use it as a portable drive, or in another system.
My main system would then be a 4TB drive @ SSD speeds (Optane being the invisible cache)

I couldn't buy a 4TB SSD for $77.

Now, obviously not everything on my system needs to be accessed fast, but the convenience of having ALL my stuff on the same drive becomes more and more relevant the longer you shuffle things about.

March 27, 2017 | 01:30 PM - Posted by Eric (not verified)

Looking at the enterprise product at ~$4 per GB and this at ~$2.4 per GB, I really hope there is some further scale in the consumer Optane SSD price.

It seems to me that, historically, once SSDs hit 120/128GB in size, most people preferred that over a cache technology.

If Optane SSDs get anywhere under $2/GB for a 120/128GB SSD, I would think the cache option would be dead in its tracks. Especially if it ever breaks the $199 price point (~$1.55 per GB).
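The per-GB figures in this thread are simple division; a quick sanity check, using the prices as quoted in the comments (the $199/128GB pairing is the commenter's hypothetical, not an announced product):

```python
def price_per_gb(price_usd, capacity_gb):
    """Dollars per gigabyte, the metric used for the comparisons above."""
    return price_usd / capacity_gb

optane_cache = price_per_gb(77, 32)    # 32GB Optane Memory module at $77
breakeven    = price_per_gb(199, 128)  # hypothetical 128GB Optane SSD at $199
```

That works out to roughly $2.41/GB for the cache module versus about $1.55/GB at the hypothetical $199 price point, which is the gap the comment is pointing at.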

My concern is that since the tech is being kept exclusive to Intel/Micron, there will not be any competition to help drive prices down over time.

March 27, 2017 | 02:14 PM - Posted by Allyn Malventano

Intel is not really going after the power user market with these. The idea is to try and speed up the large numbers of shipping OEM systems that still come with a HDD as primary storage.

March 27, 2017 | 05:11 PM - Posted by Anonymous (not verified)

This still seems like it is of very limited use. It has to be cheaper than using a low end SSD plus hard drive. Although, it is also a simpler configuration for it to appear as one drive. For higher end desktop systems, it will be better to add more DRAM to use as disk cache.

I could see much more use for it in mobile, though. It might be lower power to have a smaller amount of DRAM backed by 16 or 32 GB of Optane. It could also supply very quick suspend and resume, although I don't know if it would be much faster than using a fast flash-based device. That would depend on whether resuming is mostly sequential access. It also could be very useful for an HBM-based APU. You could have an APU with 8 to 16 GB of HBM and an Optane device as swap. The HBM would effectively act as a cache for the Optane memory. It would make a very compact but powerful system.

March 27, 2017 | 05:53 PM - Posted by Allyn Malventano

DRAM disk cache is not very aggressive and only applies if you close and reopen something fairly quickly. In all of the performance examples that Intel was showing us at Folsom, they compared HDD-only systems with 8GB of RAM to HDD+Optane systems with 4GB of RAM. I believe they are specifically going for the sell to OEMs who might go with less DRAM to offset Optane cost and still increase overall system responsiveness. 4GB still seems tight to me, but most of the average joe use cases still do fine with it, and swaps to disk would be cached much more quickly in Optane than they would to HDD or even SSD.

March 27, 2017 | 09:37 PM - Posted by Anonymous (not verified)

I mostly run Linux and I have a lot of extra DRAM, so I think the DRAM disk cache is very effective. A lot of files are opened and continuously written to while the application is running, or they are opened and closed frequently, like browser cache. I think Optane drives are a waste for a consumer desktop system. They could be much more compelling for mobile with a limited amount of memory. Makers are increasingly soldering the memory, and now even flash chips, directly onto the system board; I will not buy a MacBook Pro for that reason. DRAM burns power, though, so 4 or 8 GB of DRAM with 16 or 32 GB of Optane to back it up may perform well and use less power.

March 28, 2017 | 02:58 AM - Posted by Anonymous (not verified)

You're onto something. Thanks for the comment.

March 27, 2017 | 01:41 PM - Posted by Anonymously Anonymous (not verified)

With such a huge price per GB, the average user will just go directly to an average SSD and call it a day. Seriously, this is tech that the average user will never care about, since the performance and load times aren't that much better than an average SSD.

With OS load times of around 10-15 seconds at most, an SSD is already a HUGE improvement over an HDD. Optane might bring that down to what, 8 seconds, or even 6 seconds, if that low at all?

I think what most people will consider a significant enough jump is when you press the power button and the entire OS and desktop is up and ready almost instantaneously. Especially since SSDs for the average user will last YEARS, there is not a good enough reason to spend that much $ for very little return.

March 27, 2017 | 01:48 PM - Posted by remc86007

With good SATA SSDs, and especially with PCIe SSDs, OS load times are bottlenecked by the CPU in most cases. I doubt you'd see any difference.

March 27, 2017 | 03:12 PM - Posted by Moyeni (not verified)

So, any juicy Intel 860X info yet ? :p

March 27, 2017 | 03:46 PM - Posted by Anonymously Anonymous (not verified)

@Allyn

Theoretically, if Optane can be used in place of RAM and that particular system can have up to 64GB of RAM, would you be able to insert RAM sticks with higher capacity to reach 64GB and still be able to use the other RAM slots for some undefined capacity of Optane?

In other words, if a system can use Optane in the RAM slots and has a limit of 64GB of RAM, how much of that 64 can be used for Optane?

March 27, 2017 | 03:58 PM - Posted by Allyn Malventano

Optane DIMMs, once they become a thing, will require BIOS-level memory space partitioning so that the system can recognize what is Optane and what is DRAM at boot time. Addressing limits of a given system should also apply to Optane (in total with RAM), but given the special BIOS and validation requirements, you're only likely to see support for that format in server systems, which have higher addressing capabilities anyway. This is the same sort of thing that currently applies to NVDIMM-F.
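A toy sketch of what that boot-time partitioning might look like: the firmware publishes which physical address ranges are volatile DRAM and which are persistent Optane, loosely in the spirit of an e820/ACPI-style memory map. The structures and names below are invented for illustration, not an actual firmware interface.

```python
from dataclasses import dataclass

@dataclass
class MemRange:
    start: int   # physical base address
    size: int    # bytes
    kind: str    # "dram" (volatile) or "pmem" (persistent Optane)

def build_map(dram_gb, optane_gb):
    """Lay DRAM at the bottom of the physical address space, Optane above it.
    Both regions count against the platform's total addressing limit."""
    gb = 1 << 30
    return [
        MemRange(0, dram_gb * gb, "dram"),
        MemRange(dram_gb * gb, optane_gb * gb, "pmem"),
    ]

def total(mmap, kind):
    return sum(r.size for r in mmap if r.kind == kind)

# A 64GB-limit system split evenly between DRAM and Optane DIMMs.
mmap = build_map(dram_gb=32, optane_gb=32)
```

The point the reply makes falls out of the model: because DRAM and Optane share one address space, the platform's addressing limit caps their sum, not each independently.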

March 27, 2017 | 04:51 PM - Posted by Anonymously Anonymous (not verified)

ok, that makes sense about the capacity limits.

So that means, in order for Optane DIMMs to have any real effect for normal desktop users, we would need the RAM capacity of regular desktops to basically go through the roof. I mean, even if it were possible to do Optane DIMMs now within the average system's hard limit of 64GB - say 8GB of RAM and 56GB of Optane - that still isn't enough to have the OS live in it.

Maybe within 4 or 5 years we'll see desktops that have that magical instant-on because the OS lives in something like Optane in DIMM format? Wishful thinking on my part :)

March 27, 2017 | 06:07 PM - Posted by Allyn Malventano

Yeah, this is one of those things that would make PC computing radically different if it came about, say, 10 years ago. By now you'd have most things that you previously had to load from disk just executing natively direct from Optane. For now, we can just realize that we are running up against diminishing returns in performance compared to what we have today - NAND SSDs are already reasonably quick, and an Optane consumer SSD would be even quicker at lower latency IO. DIMMs would increase throughput even more and shave off more latency, but an NVMe Optane client SSD is probably going to get you most of the way there, especially compared to HDD. Going appreciably faster in a way you would notice would require an OS architecture change that allowed code to execute straight from XPoint DIMMs.

March 27, 2017 | 06:23 PM - Posted by Anonymous (not verified)

Optane on a DIMM will have to have much higher durability and endurance than these first-generation Optane/XPoint SKUs to justify placing any Optane on the same DIMM as DRAM. So maybe there will be DIMMs made up of only Optane (Intel's) or QuantX (Micron's) XPoint until XPoint's durability/endurance improves to the point where it can be used in a manner that isn't easily replaceable, unlike, say, SSDs or XPoint-only DIMMs that can be swapped out. Any DRAM/XPoint combo DIMMs would see the XPoint wear out before the DRAM on heavy-usage workloads like page file swaps, etc.

I'd love to see XPoint dies sharing the same DIMM as DRAM dies and making use of a unified DIMM controller that could manage DRAM-to-XPoint data transfers via a background bus on the DIMM module. The OS could write the data to DRAM and issue a command to the DIMM controller to do any DRAM-to-NVM transfers in the background, without taking up unnecessary CPU-to-DRAM channel bandwidth. Maybe even a JEDEC HBM/NVM standard that includes XPoint dies in the HBM die stacks along with the DRAM dies. The durability/endurance of any XPoint used in such a way is going to have to be orders of magnitude higher in future XPoint versions before any of these other usage scenarios can be expected.

March 27, 2017 | 08:31 PM - Posted by Isaac Johnson

I am not a 'Fanatic Gamer' nor am I an 'eSport Gamer', will I be allowed to use these as just a 'Gamer'?

March 27, 2017 | 09:10 PM - Posted by Chuck80 (not verified)

Is it reasonable to expect Optane to completely take over the DIMM slots as storage/cache, while the job of DRAM is taken over by HBM?

March 27, 2017 | 10:39 PM - Posted by pessimistic_observer (not verified)

Slow down, we haven't even seen a sample of an SoC with HBM.

March 28, 2017 | 03:18 AM - Posted by Anonymous (not verified)

So much hype, and we end up with an SSHD? Linus says Optane is 50 USD. The last 240GB SSD I bought was 55 USD.

March 28, 2017 | 12:34 PM - Posted by Anonymous (not verified)

Micron's XPoint (QuantX) is for AMD and others, and most likely there will be AM4 motherboards with some form of support if Micron takes a more licensed-IP approach.

"Micron is apparently taking a path that differs from Intel's though, in that it's looking to license its 3D Xpoint technology to other storage makers (not currently known which), in SSD or DDR-like formats, according to the company." (1)

(1)

"Micron's QuantX-based Products to Ship Late 2017"

https://www.techpowerup.com/231918/microns-quantx-based-products-to-ship...

March 28, 2017 | 09:27 PM - Posted by razor512

Why can't they make a board with a set of DDR3 slots, so that those of us moving from older platforms can have DDR4 for main system memory, and DDR3 going through the PCH as a RAM drive for caching, or as a location for virtual memory?

March 29, 2017 | 09:31 AM - Posted by Anonymous (not verified)

Because devices like this for DDR1 (Gigabyte iRAM) and DDR2 (Acard ANS-9010b) were not popular?

March 29, 2017 | 01:25 PM - Posted by Paul A. Mitchell (not verified)

> Optane DIMMs, once they become a thing, will require BIOS-level memory space partitioning so that the system can recognize what is Optane and what is DRAM at boot time.

A few years back, we filed a Provisional Patent Application for a "Format RAM" option in BIOS/UEFI subsystems.

Picture a large server with 1TB of DRAM, then format the uppermost 64GB for the OS, and install your OS directly into that ramdisk partition.

The advent of fast NVMe SSDs like the Samsung 960 Pro made possible a fast AND non-volatile OS hosted on that SSD, using a standard NTFS C: partition.

Enter Optane mounted on the desktop DIMM form factor, or SODIMM form factor.

What came to my mind were the triple-channel chipsets with 6 discrete DIMM slots, e.g. the LGA-1366 chipset. It is conceivable to allocate 4 of those 6 slots to modern DDR4 quad-channel mode, and dedicate the remaining 2 of 6 to non-volatile Optane mounted on the desktop DIMM and/or SODIMM form factors.

As Allyn has pointed out, this would require "BIOS-level memory space partitioning", but this is certainly feasible. With the proper BIOS/UEFI support, the 2 slots populated with Optane DIMMs could be formatted as standard NTFS partitions, and the OS freshly installed into one of those NTFS partitions.

The beauty of this approach, over my original idea, is that the Optane DIMMs are non-volatile, whereas the original idea actually formatted an uppermost subset of volatile DRAM.

May 3, 2017 | 09:37 AM - Posted by amadsilentthirst

I'd like to know what would happen if you paired Optane with a Green Drive

Say you had a large Green Drive, that goes into sleep mode when not accessed, which is fine when it's a second drive.

What happens when it is the only drive in the system paired with some Optane Cache?
Would the cache cause it to enter sleep, and then negate the speed increases, since it'll have to wait for the drive to spin up again?

So many questions, just not enough space

May 9, 2017 | 10:54 PM - Posted by MRFS

http://www.legitreviews.com/intel-optane-memory-tested-boot-drive-second...

Intel Optane Memory Tested As Boot Drive, Secondary and RAID 0
Posted by Nathan Kirsch | Tue, May 09, 2017 - 9:42 AM
