
Intel Optane Memory 32GB Review - Faster Than Lightning

Subject: Storage
Manufacturer: Intel

Conclusion, Pricing, and Final Thoughts



Pros:

  • Simple, no-frills installation process
  • Outstanding performance gains, especially to HDD-equipped systems
  • It's the fastest (lowest latency) thing we've ever tested. Period. (yes, again)


Cons:

  • Requires a current-generation motherboard, preventing upgrades to older systems
  • Cost (see below)
  • Too small to use as a boot SSD!
    • (clearly not its intended purpose, but you can RAID them!)



  • Optane Memory 16GB: $44 ($2.75/GB) (Amazon)
  • Optane Memory 32GB: $77 ($2.40/GB)

These prices are high for the capacity you are getting, but the performance benefits more than make up for the cost. Further, since a little goes a long way in terms of cache performance, even 32GB still comes in at under $100.
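The per-gigabyte figures above are easy to verify (a minimal sketch; note the 32GB figure works out to roughly $2.41/GB when rounded rather than truncated to $2.40):

```python
# Cost-per-gigabyte check for the two Optane Memory SKUs,
# using the street prices listed above (USD).
prices = {16: 44.00, 32: 77.00}  # capacity in GB -> price in USD

for capacity_gb, usd in sorted(prices.items()):
    per_gb = usd / capacity_gb
    print(f"Optane Memory {capacity_gb}GB: ${usd:.0f} (${per_gb:.2f}/GB)")
```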

Final Thoughts:

I knew XPoint would enable all sorts of performance gains for storage systems, but I honestly did not expect Optane Memory to net such large benefits, and to do so this well. While previous hybrid/caching technologies have been decent, the outstanding latency of XPoint enables an Optane Memory cache to boost HDD systems to meet or even exceed the performance of NAND-SSD-equipped machines! While the intended market is clearly as an upgrade for HDD-only systems, measurable benefits can be seen even when caching a SATA SSD. When paired with a budget SATA SSD, we saw boot times cut in half, coming in nearly a second faster than a NAND NVMe SSD! Overall, I am extremely impressed with the performance benefits and implementation of Optane Memory.


This is an outstanding product. My only wish is that support be added to prior-generation chipsets.


April 24, 2017 | 01:45 PM - Posted by psuedonymous

Soooo.... Pair one with a 1TB 960 Evo?

April 24, 2017 | 04:53 PM - Posted by Allyn Malventano

Cool idea, but I'd rather put three of these Optane Memory SSDs into a 96GB RAID-0 and just boot from that, mounting the 1TB 960 EVO as a separate game/misc drive. But I'm crazy like that.

April 24, 2017 | 06:15 PM - Posted by getfunk

but would it be better to run just the 960 evo or pair with the Optane?

April 25, 2017 | 02:54 AM - Posted by psuedonymous

I only have the two M.2 slots to play with (ITX); I don't think I could run with only 64GB of internal storage!

April 29, 2017 | 10:24 PM - Posted by MRFS

> put three of these Optane Memory SSDs into a 96GB RAID-0 and just boot from that


Do you have enough compatible spare parts in your lab to try the following: 2 x 32GB Optane wired to M.2 slots with M.2-to-U.2 adapters and cables, a fresh install of Windows 10, then connect a large NAND flash SSD or a large HDD later for dedicated data storage?

Yes, we understand that this setup would be limited by the DMI 3.0 link; but nevertheless, I believe many DIY prosumers would like to know that it is possible.

Are we limited to Kaby Lake CPUs for this experiment to work at all?


June 3, 2017 | 11:59 AM - Posted by rcald2000

Allyn, I considered "borrowing" your idea of having a 96GB RAID-0 array for OS, and 1TB 960 EVO for apps/games/data. But I have a dilemma: That combination would require 4 M.2 slots, but 3 slots are the most that I find available on Z270 motherboards. Any suggestion?

June 3, 2017 | 12:10 PM - Posted by rcald2000

The only solution to the lack of M.2 slots would be a riser card plugged into the PCIe x16 slot. I "think" PC Perspective just reviewed a 4 or 8 M.2 card on the last podcast. Would the single 960 EVO go into the slot, or the RAID'd Optane drives? Thank you for reading and potentially responding to my comments.

April 24, 2017 | 02:07 PM - Posted by Xebec

OK, Now that I can visually see the latency difference between L2/L3 cache, and a Floppy Drive.. my life is fulfilled! :)

FYI - missing a few numbers on the first chart of the page covering Optane RAID results

April 24, 2017 | 04:52 PM - Posted by Allyn Malventano

I was trying to keep the focus on the read performance there. The write results appear inverted because NAND SSDs handle burst writes faster than they handle burst reads, but the important factor determining application launch performance falls on the random reads.

April 24, 2017 | 02:05 PM - Posted by Jabbadap

Any news about Linux support? I couldn't care less about Windows.

April 24, 2017 | 04:47 PM - Posted by Allyn Malventano

I'm sorry to say it, but for desktop, Intel probably couldn't care less about <2% of the market. It is a standard NVMe part, though, so feel free to pick one up and code your own caching driver (I hope it fares better than bcache). Coding your own additions is a big benefit of Linux, after all.

April 24, 2017 | 07:03 PM - Posted by josh4trunks

Allyn, I'm wondering what stops this from working as a block device (not using Intel's caching) on an older motherboard that supports PCIe 3.0? In my case a Supermicro X9SCM with an i3-3220.

I was thinking of using it as a ZFS Intent Log (ZIL) for ZFS. I assume my operating system (FreeNAS) would also need a driver for this, as with any NVMe drive?


April 24, 2017 | 07:35 PM - Posted by Allyn Malventano

It should be addressable as a block device so long as the OS has NVMe support.

April 24, 2017 | 07:37 PM - Posted by josh4trunks

I was afraid you would say that. OK, guess I gotta pull my credit card out...

April 25, 2017 | 08:23 AM - Posted by Jabbadap

Well, they have one of the biggest open source teams; they do care about Linux. But fair enough, not necessarily desktop.

Michael at Phoronix got confirmation that it should work fine as standard NVMe storage. Heck, I think that just means you could install a whole Linux system on it, or mount part of the system directories there (like replacing a quite common speed tweak on Linux: mounting /var/tmp in main memory with tmpfs).

April 24, 2017 | 02:17 PM - Posted by IntelGiant

OH Come on Allyn! :)

We know you've already tested the little blue Optane module with an NVMe SSD.

So Tell US!
So Tell US!
So Tell US!
So Tell US!

Can we buy one and increase our response speeds with a Samsung 960 EVO M.2?

Isn't that what the TWO M.2 slots on ALL Z270 motherboards are ALL ABOUT?

(big smile)

April 24, 2017 | 04:40 PM - Posted by Allyn Malventano

Caching an NVMe SSD with another NVMe SSD gets to the point where overhead becomes an issue, mostly because you are still bottlenecked by DMI throughput at the end of the day. I'm sure there would be a benefit in some cases, but we were already at the point of diminishing returns with Optane caching a SATA SSD. You're still limited by NAND flash latencies on a 960 PRO, so QD1 performance while cached by Optane should end up similar to caching an 850 series (SATA) part.

I also wouldn't recommend that specific configuration because I doubt Intel supports it, so you may run into odd compatibility issues.

April 24, 2017 | 05:21 PM - Posted by IntelGiant

Yeah, I can understand Intel locking out all Samsung NVMe M.2 SKUs.

Do you think "that maybe" the 32GB Optane module will help boost low QD1 to QD4 response times when used with an Intel 750 NVMe 400GB drive?

I have one of those also... And compared to the Samsung 960 EVO 250GB M.2 drive, the Intel NVMe could really use a performance boost. : )

I will purchase an Intel 900P client/consumer Optane drive when they are ready for prime time.

April 24, 2017 | 05:39 PM - Posted by Allyn Malventano

I wasn't suggesting Intel locked out Samsung SKUs, I was instead referring to the Optane Memory driver / firmware maybe not playing as nicely when attempting to cache NVMe with NVMe.

Yeah it could boost lower QD performance of the SSD 750.

Regarding the 900P, you and me both!

April 25, 2017 | 02:08 AM - Posted by fuersliph

Wait, I need the newest chipset and processor to run Optane, but the only performance increase is with a SATA SSD or HDD?

April 25, 2017 | 04:55 PM - Posted by Allyn Malventano

No, the performance increase happens over an HDD, but we also tested it caching a SATA SSD as a comparison point.

April 24, 2017 | 04:01 PM - Posted by DavidC1

The 16GB will have even lower latency. The 32GB is rated at 9/30us while the 16GB is at 7/18us.

I think the reason they gave the 32GB to reviewers is that the 16GB's sequential write is halved to 145MB/s, which might garner unfavorable reviews.

The difference is interesting, as is the lower latency compared to the P4800X. It's likely the overhead of driving multiple channels. The throughput suggests the 16GB has a 1-channel controller while the 32GB has 2; the P4800X has 7. The 16GB version thus has less overhead, exposing more of the media-level performance.

The write throughput scales nearly linearly with the number of channels.

16GB 1 channel: 145MB/s
32GB 2 channel: 290MB/s
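DavidC1's scaling argument can be written out as a rough model (a sketch only; the 145 MB/s per-channel figure and the 1-/2-channel split are taken from the comment above, not from an Intel datasheet):

```python
# Rough model: sequential write throughput scaling ~linearly
# with controller channel count, per the figures quoted above.
PER_CHANNEL_WRITE_MBPS = 145  # rated sequential write of the 1-channel 16GB part

def estimated_write_mbps(channels: int) -> int:
    """Estimated sequential write throughput in MB/s for a given channel count."""
    return channels * PER_CHANNEL_WRITE_MBPS

for capacity_gb, channels in [(16, 1), (32, 2)]:
    print(f"{capacity_gb}GB ({channels}-channel): ~{estimated_write_mbps(channels)} MB/s")
```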

April 24, 2017 | 04:36 PM - Posted by Allyn Malventano

1 channel for 16GB and 2 channels for 32GB is most certainly the case considering a single XPoint die is 16GB of capacity :). 4KB random reads on the P4800X would still be only hitting one die at a time. Lower latencies at smaller capacities are likely down to simpler controller design enabled by the fewer channels / less addressable capacity.

April 24, 2017 | 04:32 PM - Posted by quest4glory

Minor nitpick with the charts: I can't tell the difference between your blue line and your (I assume) green line. They look the same to me.

April 24, 2017 | 04:37 PM - Posted by Allyn Malventano

If you're talking about the QoS 4KB Random Read chart, I included a zoomed-in version a few charts down - it better shows the separation in what was heavily overlapped in the first chart.

April 24, 2017 | 04:40 PM - Posted by quest4glory

I was, thank you. That helps.

April 24, 2017 | 05:28 PM - Posted by IntelGiant

Thanks for the amazing news Allyn, you totally ROCK!!!

The best reviewer in storage technology the community has to offer.

We need more great minds like yours.

: )

The guy over at AnandTech is kinda OK on storage, but he lacks excitement and passion, and he's nowhere near as smart as Allyn.

April 24, 2017 | 05:46 PM - Posted by zme-ul

Can you test if the Optane cache works with Linux distros?
If the cache is somewhat UEFI-based, it should theoretically work.

April 24, 2017 | 06:17 PM - Posted by Jeremy Hellstrom

Possibly, but I know that there have been issues with the Linux kernel and NVMe; if your distro is happy with NVMe, it can physically work. Convincing the OS to use it as a cache might be a different story.

April 24, 2017 | 07:07 PM - Posted by FallenBytes

It's tough trying to differentiate various similar shades of semi-transparent blue/orange on a dark blue background and compare them to the graphed lines on a black background.

Would it be possible to place a black box behind the key in future graphs so it's easier to compare it to the chart and possibly prevent eye strain in some readers?

April 24, 2017 | 07:34 PM - Posted by Allyn Malventano

I feel your pain. Trying to get 9 plot lines on the same chart, where there were three 'brackets' of three results each, well, that's as good as I could make it look. Good idea on the legend thing. I'll try and implement a form of that in the next review using these charts, as well as trying to get more creative with the dash styles to differentiate plot lines even further.

April 24, 2017 | 07:25 PM - Posted by CNote

I'd like to see it with an SSHD, to see what happens with another cache in between the Optane and the HDD.

April 24, 2017 | 07:32 PM - Posted by Allyn Malventano

The Optane benefits are so far ahead of the HDD, and bring the results so close to those of a SATA SSD, that it will look very much the same as it did when caching the HDD, with the occasional burst of how it looks when caching the SATA SSD (very similar in most timed tests).

April 25, 2017 | 03:32 AM - Posted by seinthebear

"Only took 8 years for that particular pipe dream to cone true."

Spotted a typo, should be "come". Otherwise, great write-up! :)

April 25, 2017 | 05:51 AM - Posted by Ryan Shrout


April 25, 2017 | 02:13 PM - Posted by Lucidor

On the subject of typos...
The second graph in the Client Performance section specifies 'ud' as the latency measurement units, when it's clearly supposed to be 'us' and while you're at it, you could even fire up the character map and spoil us with a μ instead of a 'u'.

On the subject of the article...
1) Fantastic work!
2) It's a bummer Optane requires Kaby Lake to run. If it didn't, I'd be enthusiastically badgering my boss to get a handful of the 32GB drives for our web servers to store databases on.

April 25, 2017 | 05:04 PM - Posted by Allyn Malventano

Good catch. Thanks. Fixed.

1) Thanks!
2) It only requires Kaby Lake to function as a storage cache. It's a standard NVMe SSD otherwise. You can even chipset-RAID up to three of them if you can find the appropriate Z170 board, but again, no caching.

April 26, 2017 | 01:33 AM - Posted by Lucidor

That could have perhaps been made clearer then.
Every piece of news coverage of Optane I've seen has warned about Optane needing the latest Intel platform, without mentioning that that's only the case if you want to be using it with Intel's caching software.

April 26, 2017 | 05:16 PM - Posted by Kamgusta

Can we expect newer SSDs from Intel and Micron to be hybrid NAND/XPoint? This could destroy all the other SSD players: all the storage capacity of triple-stacked NAND with the speed of XPoint.

Intel just created a MOAM (Mother of all Memories)...

April 26, 2017 | 07:17 PM - Posted by MRFS


Broadcom has announced a "tri-mode" RAID controller, model 9460-16i, with an x8 edge connector. I believe the "16i" refers to 4 x SAS connectors @ 4 devices each, with "i"nternal cabling.

But from what I can decipher, wiring 4 x NVMe SSDs requires Broadcom's special "U.2 enabler cable" (which you can view in Broadcom's User Guide for that AIC).

P.S. I'm still looking for an x16 NVMe RAID controller that supports modern RAID modes and standard U.2 cables; that way, we can test M.2 NVMe SSDs using your Syba 2.5" enclosure.

But I know you're busy. This was merely an FYI.


April 27, 2017 | 12:35 AM - Posted by rukur

And servers will be running Windows 10 with this :) Will the tracking hold back the drive speed?

April 27, 2017 | 04:26 PM - Posted by deagle50

Can we use this device as a readyboost drive on pre-Kaby Lake systems? And if so, would it even help?

April 27, 2017 | 06:36 PM - Posted by Jeremy Hellstrom

April 27, 2017 | 07:40 PM - Posted by MRFS

Maybe Micron is reading this and won't make the same mistake?
(Let's not hold our breath, however.)

April 30, 2017 | 03:08 PM - Posted by deagle50

I was referring to Windows ReadyBoost, not the Intel caching software. I'm guessing that is still a no?

April 28, 2017 | 07:09 AM - Posted by jihadjoe

Easily the most detailed review of the 32GB Optane. Thanks and kudos!

April 28, 2017 | 04:44 PM - Posted by rcald2000

Allyn, would Optane work as a cache on a Z270 chipset motherboard with an Intel Pentium G4560?

April 29, 2017 | 09:31 PM - Posted by Allyn Malventano

The G4560 does not work with Optane Memory. I asked Intel what was special about the 'Core' Kaby Lake parts that allowed Optane to function, and that part of my question was not answered.

April 30, 2017 | 01:07 AM - Posted by aaguilar

Hi Allyn,
Any idea how this memory would perform in a SATA M.2 port?
I was looking for a cheap laptop to replace my Chromebook, and saw this one on Amazon: the Acer Aspire E 15 E5-575-33BM.

May 1, 2017 | 04:41 PM - Posted by Pieter82

Hi Allyn, awesome review.

The write throughput of the 850 EVO with Optane gets cut in half. I can't find a QoS 4KB Random Write chart for the 960 EVO or PRO.

Do I understand correctly that the write latency is lower on the 850 with Optane than on a single 850 or 960 EVO?

A 960 EVO 500GB is about the same price as an 850 EVO 500GB plus 32GB of Optane.

Your advice?

May 9, 2017 | 10:52 PM - Posted by MRFS

Intel Optane Memory Tested As Boot Drive, Secondary and RAID 0
Posted by Nathan Kirsch | Tue, May 09, 2017 - 9:42 AM

January 9, 2018 | 10:20 PM - Posted by Stever (not verified)

Was wondering: if you had a 32GB Optane, an NVMe SSD, and a SATA HDD, and the NVMe is the boot drive, would the SATA HDD speed increase? Would the NVMe speed decrease?

December 3, 2018 | 02:04 AM - Posted by Anonymous3653885 (not verified)

I need to buy a new laptop (for work usage) and I am shopping for different models. Can someone please tell me which is the fastest to load applications and to run modelling programs such as AutoCAD, STAAD.Pro, etc.?

1. SSHD (Seagate FireCuda)
2. HDD + Intel Optane
3. SSHD (Seagate Firecuda) + Intel Optane
4. Storage HDD + M.2 SSD drive for OS partition and applications
5. Intel Optane in m.2 slot + 2.5 inch SATA SSD in storage slot
6. SATA SSD 2.5 inch as primary storage and empty/unused m.2 slot
7. SSD in m.2 slot and empty/unused slot for the 2.5 inch storage.

I already have a laptop for personal use, and that's running an ordinary HDD; it's enough for me, as all I do is check my email or watch movies. However, for my work laptop I want storage that can open and run applications quite fast. I'm confused whether I should go for (HDD + Optane) or (SSD + Optane) or use only an SSD. Will Optane give good results when combined with a 2.5-inch SATA SSD?

NOTE: All SSDs (m.2 or 2.5 inch SATA) will be Samsung.
