
Intel Optane Memory 32GB Review - Faster Than Lightning

Subject: Storage
Manufacturer: Intel

Installation, Timed Tests and Observations

Installation

Intel shipped us a pre-built machine that was meant to match the typical OEM-style build one would expect to receive with Optane Memory installed. The hardware was all 7th generation gear, but not necessarily high end. The CPU was an i5-7500 (3.4GHz quad core - no hyperthreading) and the system relied on Intel’s integrated HD 630 Graphics.

While Optane Memory came pre-configured, the installer was present and we could replicate the typical install process by first disabling Optane Memory and removing the driver.

For anyone familiar with Intel’s older Smart Response Technology, the process is very similar, though Optane Memory is not yet integrated into Intel’s RST driver the way SRT was; it ships as a standalone installer and application. With the driver and Optane Memory module installed, enabling it is as simple as clicking a button and rebooting the machine.

During the first reboot after enabling, we caught the UEFI BIOS completing an additional step just after POST. This prompt only seems to appear during the enabling process. Once you've booted, that’s it. It’s just that simple. From this point forward you get Optane memory acceleration. Your HDD (or SSD) is renamed to indicate the new hybrid status and the Optane SSD itself no longer appears under disk management.

Timed Tests

The best way to get an idea of how fast Optane Memory ‘feels’ is to go low tech with stopwatch testing. We configured multiple systems identically and ran these tests in the exact same sequence across all configurations. Times are an average of multiple runs wherever possible.
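Our manual stopwatch procedure can be approximated in software. Here is a minimal sketch of a timing harness that runs an operation several times and averages the results, as we did by hand; the command and run count are illustrative placeholders, not our actual test script:

```python
import statistics
import subprocess
import time

def timed_runs(cmd, runs=3):
    """Run a command several times and return the mean wall-clock seconds.

    Both cmd and runs are placeholders for illustration; real launch-time
    testing would also need to account for OS caching between runs.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()  # high-resolution monotonic timer
        subprocess.run(cmd, check=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)
```

Usage would look like `timed_runs(["some_game.exe"], runs=3)` (a hypothetical target), giving an average comparable across identically configured systems.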

We deliberately went heavy on some of the operations to really push the Optane Memory system as a whole. Our general findings are as follows:

  • SATA HDD speeds are brought to match SATA SSD speeds.
  • SATA SSD speeds are in some (few) cases accelerated even further.
  • Some specific game workloads (level loading in Doom) do not hit the storage in a latency critical manner. While Doom took far longer to launch on an HDD, level loads once in the game were similar regardless of HDD/SSD speeds or caching.

Yes, you read that right, SATA SSDs can be accelerated by Optane Memory and in some cases the end result beats even an NVMe SSD (in this case, the 960 EVO!).

Let’s get a closer look at those boot times:

Optane Memory absolutely excels at small random access at VERY low latencies, and that is a primary contributor to system boot speed. The gains are sufficient to accelerate a HDD system to speeds approaching that of a Samsung 960 EVO system, and in the case of Optane Memory accelerating a SATA SSD, even faster!
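A back-of-the-envelope sketch shows why per-access latency, not sequential throughput, dominates this kind of work. The HDD and SATA NAND figures below are assumed ballparks; the 9 µs figure is the 32GB module's rated read latency:

```python
# Back-of-the-envelope QD1 math: boot-time work is many small scattered
# reads, so total time tracks per-access latency, not sequential MB/s.
# HDD and SATA NAND latencies are assumed ballparks for illustration;
# 9 us is the rated read latency of the 32GB Optane Memory module.
LATENCY_US = {
    "HDD": 10_000,
    "SATA NAND SSD": 100,
    "Optane Memory 32GB": 9,
}

READS = 1_000  # 1,000 scattered 4KB reads at queue depth 1

for device, lat_us in LATENCY_US.items():
    total_ms = READS * lat_us / 1_000
    print(f"{device}: {total_ms:.0f} ms")
```

With these (illustrative) numbers, a thousand scattered reads cost ten full seconds on an HDD but only single-digit milliseconds on Optane, which is the effect the stopwatch boot tests are capturing.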

Cache Roll-off

A primary concern I had going into this review was that 32GB might not be sufficient to cache the heaviest uses. I was pleasantly surprised, and you should be too considering that I did the following within one session (no reboots):

  • Download and install VMware
  • Download Windows 10 ISO image from Microsoft
  • Install Windows 10 Pro within a VMware VM
  • Play Doom
  • Play Ashes Escalation

After all the above, not only was the Ashes Escalation launch time identical to previous runs, the reboot I performed after this sequence was only a bit slower than the typical time, coming in at 12.89 seconds (still way faster than the 40+ second typical HDD-only time). Since I had passed far more than 32GB through the storage, it is clear that the Optane Memory driver is very smart about what it caches and what it doesn’t.
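One plausible explanation for the cache surviving that much traffic is stream detection: large sequential transfers bypass the cache so they cannot evict hot, small random-access data. The toy model below is purely our speculation; Intel has not published its policy, and the 1 MiB threshold is invented for illustration:

```python
class SelectiveCache:
    """Toy model of a selective cache: large sequential transfers bypass
    the cache so bulk downloads can't evict hot random-access data.

    The bypass threshold is invented for illustration; Intel's actual
    caching policy for Optane Memory is not public.
    """

    SEQUENTIAL_BYPASS_BYTES = 1 * 1024 * 1024  # bypass I/O >= 1 MiB (assumed)

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = {}

    def access(self, lba, length):
        if length >= self.SEQUENTIAL_BYPASS_BYTES:
            return "bypass"            # bulk stream: don't pollute the cache
        if lba in self.entries:
            return "hit"               # hot data already cached
        if self.used + length <= self.capacity:
            self.entries[lba] = length # small random access: worth caching
            self.used += length
            return "miss-cached"
        return "miss-full"

cache = SelectiveCache(capacity_bytes=32 * 1024**3)  # 32GB module
print(cache.access(0, 4096))           # small random read: gets cached
print(cache.access(0, 4096))           # repeat access: cache hit
print(cache.access(100, 8 * 1024**2))  # 8 MiB stream: bypasses the cache
```

Under a policy like this, the multi-gigabyte ISO download and VM install would stream past the cache, which would explain why boot and game-launch data stayed resident.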


April 24, 2017 | 01:45 PM - Posted by psuedonymous

Soooo.... Pair one with a 1TB 960 Evo?

April 24, 2017 | 04:53 PM - Posted by Allyn Malventano

Cool idea, but I'd rather put three of these Optane Memory SSDs into a 96GB RAID-0 and just boot from that, mounting the 1TB 960 EVO as a separate game/misc drive. But I'm crazy like that.

April 24, 2017 | 06:15 PM - Posted by getfunk

but would it be better to run just the 960 evo or pair with the Optane?

April 25, 2017 | 02:54 AM - Posted by psuedonymous

Only have the two M.2 slots to play with (ITX), I don't think I could run with only 64GB of internal storage!

April 29, 2017 | 10:24 PM - Posted by MRFS

> put three of these Optane Memory SSDs into a 96GB RAID-0 and just boot from that

Allyn,

Do you have enough compatible spare parts in your lab to try the following:

2 x 32GB Optane wired to M.2 slots with M.2-to-U.2 adapters and cables

fresh install of Windows 10

connect a large NAND flash SSD or large HDD later, for dedicated data storage

Yes, we understand that this setup would be limited by the DMI 3.0 link; but, nevertheless, I believe many DIY prosumers would like to know that it was possible.

Are we limited to Kaby Lake CPUs for this
experiment to work at all?

THANKS AGAIN FOR ALL YOU DO!

June 3, 2017 | 11:59 AM - Posted by rcald2000

Allyn, I considered "borrowing" your idea of having a 96GB RAID-0 array for OS, and 1TB 960 EVO for apps/games/data. But I have a dilemma: That combination would require 4 M.2 slots, but 3 slots are the most that I find available on Z270 motherboards. Any suggestion?

June 3, 2017 | 12:10 PM - Posted by rcald2000

The only solution to the lack of M.2 slots would be a riser card plugged into the PCIe x16 slot. I "think" PC Perspective just reviewed a 4 or 8 M.2 card on the last podcast. Would the single 960 EVO go into the slot, or the RAID'd Optane drives? Thank you for reading and potentially responding to my comments.

April 24, 2017 | 02:07 PM - Posted by Xebec

OK, Now that I can visually see the latency difference between L2/L3 cache, and a Floppy Drive.. my life is fulfilled! :)

FYI - missing a few numbers on the first chart of the page covering Optane RAID results

April 24, 2017 | 04:52 PM - Posted by Allyn Malventano

I was trying to keep the focus on the read performance there. The write results appear inverted because NAND SSDs handle burst writes faster than they handle burst reads, but the important factor determining application launch performance falls on the random reads.

April 24, 2017 | 02:05 PM - Posted by Jabbadap

Any news about Linux support? I couldn't care less about Windows.

April 24, 2017 | 04:47 PM - Posted by Allyn Malventano

I'm sorry to say it, but for desktop, Intel probably couldn't care less about <2% of the market. It is a standard NVMe part though, so feel free to pick one up and code your own caching driver (I hope it fares better than BCache). Coding your own additions is a big benefit of Linux, after all.

April 24, 2017 | 07:03 PM - Posted by josh4trunks

Allyn, I'm wondering what stops this from working as a block device (not using Intel's caching) on an older motherboard that supports PCIe 3.0? In my case a Supermicro X9SCM with an i3-3220.

I was thinking of using it as a ZFS Intent Log (ZIL) for ZFS. I assume my operating system (FreeNAS) would also need a driver for this, as with any NVMe drive?

Thanks

April 24, 2017 | 07:35 PM - Posted by Allyn Malventano

It should be addressable as a block device so long as the OS has NVMe support.

April 24, 2017 | 07:37 PM - Posted by josh4trunks

I was afraid you would say that. OK, guess I gotta pull my credit card out...

April 25, 2017 | 08:23 AM - Posted by Jabbadap

Well, they have one of the biggest open source teams; they do care about Linux. But fair enough, not necessarily desktop.

Michael at Phoronix got confirmation that it should work fine as standard NVMe storage. Heck, I think that just means you could install a whole Linux system on it, or mount some system directories there (like replacing the common Linux speed tweak of mounting /var/tmp in main memory with tmpfs).

April 24, 2017 | 02:17 PM - Posted by IntelGiant

OH Come on Allyn! :)

We know you've already tested the little blue Optane module with an NVMe SSD.

So Tell US!
So Tell US!
So Tell US!
So Tell US!

Can we buy one and increase our response speeds with a Samsung 960 EVO M.2?

Isn't that what the TWO M.2 slots on ALL Z270 motherboards is ALL ABOUT?

(big smile)

April 24, 2017 | 04:40 PM - Posted by Allyn Malventano

Caching an NVMe SSD with another NVMe SSD gets to the point where overhead becomes an issue, mostly because you are still bottlenecked by DMI throughputs at the end of the day. I'm sure there would be a benefit in some cases, but we were getting to the point of diminishing returns with Optane caching a SATA SSD. You're still limited by NAND flash latencies on a 960 PRO, so QD1 performance while cached by Optane should end up similar to when caching an 850 series (SATA) part.

I also wouldn't recommend that specific configuration because I doubt Intel supports it, so you may run into odd compatibility issues.

April 24, 2017 | 05:21 PM - Posted by IntelGiant

Yea, I can understand Intel locking out all Samsung NVMe M.2 skus.

Do you think "that maybe" the 32GB Optane module will help boost low Q1 to Q4 response time used with an Intel 750 NVMe 400GB drive?

I have one of those also... And compared to the Samsung 960 EVO 250GB M.2 drive, the Intel NVMe could really use a performance boost. : )

I will purchase an Intel 900P client/consumer Optane drive when they are ready for prime time.

http://www.guru3d.com/news-story/intel-working-on-900p-consumer-optane-s...

April 24, 2017 | 05:39 PM - Posted by Allyn Malventano

I wasn't suggesting Intel locked out Samsung SKUs, I was instead referring to the Optane Memory driver / firmware maybe not playing as nicely when attempting to cache NVMe with NVMe.

Yeah it could boost lower QD performance of the SSD 750.

Regarding the 900P, you and me both!

April 25, 2017 | 02:08 AM - Posted by fuersliph

Wait, I need the newest chipset and processor to run Optane, but the only performance increase is with a SATA SSD or HDD?

April 25, 2017 | 04:55 PM - Posted by Allyn Malventano

No, the performance increase happens over a HDD, but we also tested it caching a SATA SSD as a comparison point.

April 24, 2017 | 04:01 PM - Posted by DavidC1

The 16GB will be even lower latency. The 32GB is rated at 9/30us while 16GB is at 7/18us.

I think the reason they gave 32GB to reviewers is because the sequential write is halved to 145MB/s and it might garner unfavorable reviews.

The difference is interesting, as is the lower latency compared to the P4800X. It's likely the overhead of running multiple channels. The throughput suggests the 16GB has a 1-channel controller while the 32GB has 2; the P4800X has 7. The 16GB version thus has less overhead, exposing more of the media-level performance.

The write throughput scales nearly linearly with number of channels.

16GB 1 channel: 145MB/s
32GB 2 channel: 290MB/s

April 24, 2017 | 04:36 PM - Posted by Allyn Malventano

1 channel for 16GB and 2 channels for 32GB is most certainly the case considering a single XPoint die is 16GB of capacity :). 4KB random reads on the P4800X would still be only hitting one die at a time. Lower latencies at smaller capacities are likely down to simpler controller design enabled by the fewer channels / less addressable capacity.

April 24, 2017 | 04:32 PM - Posted by quest4glory

Minor nitpick with the charts: I can't tell the difference between your blue line and your, I assume green line. They look the same to me.

April 24, 2017 | 04:37 PM - Posted by Allyn Malventano

If you're talking about the QoS 4KB Random Read, I included a zoomed-in version a few charts down - it better shows the separation in what was heavily overlapped in the first chart.

April 24, 2017 | 04:40 PM - Posted by quest4glory

I was, thank you. That helps.

April 24, 2017 | 05:28 PM - Posted by IntelGiant

Thanks for the amazing news Allyn, you totally ROCK!!!

The best reviewer in storage technology the community has to offer.

We need more great minds like yours.

: )

The guy over at Anandtech is kinda ok on storage, but he lacks excitement and passion, and he's nowhere near as smart as Allyn.

April 24, 2017 | 05:46 PM - Posted by zme-ul

can you test if the Optane cache works with Linux distros?
if the cache is somewhat UEFI based, it should theoretically work

April 24, 2017 | 06:17 PM - Posted by Jeremy Hellstrom

Possibly, but I know that there have been issues with the Linux kernel and NVMe; if your distro is happy with NVMe it can physically work. Convincing the OS to use it as a cache might be a different story.

April 24, 2017 | 07:07 PM - Posted by FallenBytes

Trying to differentiate various similar shades of semi-transparent blue/orange on a dark blue background and compare them to the graphed lines on a black background.

Would it be possible to place a black box behind the key in future graphs so it's easier to compare it to the chart and possibly prevent eye strain in some readers?

April 24, 2017 | 07:34 PM - Posted by Allyn Malventano

I feel your pain. Trying to get 9 plot lines on the same chart, where there were three 'brackets' of three results each, well, that's as good as I could make it look. Good idea on the legend thing. I'll try and implement a form of that in the next review using these charts, as well as trying to get more creative with the dash styles to differentiate plot lines even further.

April 24, 2017 | 07:25 PM - Posted by CNote

I'd like to see it with a SSHD to see what happens with having another cache in between the Optane and the HDD.

April 24, 2017 | 07:32 PM - Posted by Allyn Malventano

The Optane benefits are so far ahead of HDD, and bring the results so close to those of a SATA SSD, that it will look very much the same as it did when caching the HDD, with the occasional burst of how it looks when caching the SATA SSD (very similar in most timed tests).

April 25, 2017 | 03:32 AM - Posted by seinthebear

"Only took 8 years for that particular pipe dream to cone true."

Spotted a typo, should be "come". Otherwise, great write-up! :)

April 25, 2017 | 05:51 AM - Posted by Ryan Shrout

Fixed!

April 25, 2017 | 02:13 PM - Posted by Lucidor

On the subject of typos...
The second graph in the Client Performance section specifies 'ud' as the latency measurement units, when it's clearly supposed to be 'us' and while you're at it, you could even fire up the character map and spoil us with a μ instead of a 'u'.

On the subject of the article...
1) Fantastic work!
2) It's a bummer Optane requires Kaby Lake to run. If it didn't, I'd be enthusiastically badgering my boss to get a handful of the 32GB drives for our web servers to store databases on.

April 25, 2017 | 05:04 PM - Posted by Allyn Malventano

Good catch. Thanks. Fixed.

1) Thanks!
2) It only requires Kaby Lake to function as a storage cache. It's a standard NVMe SSD otherwise. You can even chipset RAID up to three of them if you can find the appropriate Z170 board, but again, no caching.

April 26, 2017 | 01:33 AM - Posted by Lucidor

That could have perhaps been made clearer then.
Every piece of news coverage of Optane I've seen has warned about Optane needing the latest Intel platform, without mentioning that that's only the case if you want to be using it with Intel's caching software.

April 26, 2017 | 05:16 PM - Posted by Kamgusta

Can we expect newer SSDs from Intel and Micron to be hybrid NAND/XPoint? This could destroy all other SSD players: all the storage capacity of triple-stacked NAND with the speed of XPoint.

Intel just created a MOAM (Mother of all Memories)...

April 26, 2017 | 07:17 PM - Posted by MRFS

Allyn,

Broadcom have announced a "tri-mode" RAID controller, model 9460-16i, with an x8 edge connector:

https://www.broadcom.com/products/storage/raid-controllers/megaraid-9460...

I believe the "16i" refers to 4 x SAS connectors @ 4 devices each, with "i"nternal cabling.

But, from what I can decipher, wiring 4 x NVMe SSDs requires Broadcom's special "U.2 enabler cable" (which you can view in Broadcom's User Guide for that AIC).

p.s. I'm still looking for an x16 NVMe RAID controller that supports modern RAID modes and standard U.2 cables: that way, we can test M.2 NVMe SSDs using your Syba 2.5" enclosure.

But I know you're busy. This was merely an FYI.

KEEP UP THE GOOD WORK!

April 27, 2017 | 12:35 AM - Posted by rukur

And servers will be running Windows 10 with this :) Will the tracking hold back the drive speed?

April 27, 2017 | 04:26 PM - Posted by deagle50

Can we use this device as a readyboost drive on pre-Kaby Lake systems? And if so, would it even help?

April 27, 2017 | 06:36 PM - Posted by Jeremy Hellstrom

April 27, 2017 | 07:40 PM - Posted by MRFS

Maybe Micron is reading this and won't make the same mistake?
(Let's not hold our breath, however.)

April 30, 2017 | 03:08 PM - Posted by deagle50

I was referring to Windows ReadyBoost, not the Intel caching software. I'm guessing that is still a no?

April 28, 2017 | 07:09 AM - Posted by jihadjoe

Easily the most detailed review of the 32GB Optane. Thanks and kudos!

April 28, 2017 | 04:44 PM - Posted by rcald2000

Allyn, would Optane work as a cache on a Z270 chipset motherboard with an Intel Pentium G4560?

April 29, 2017 | 09:31 PM - Posted by Allyn Malventano

G4560 does not work with Optane Memory. I asked Intel what was special about the 'Core' Kaby Lake parts that allowed Optane to function, and that part of my question was not answered.

April 30, 2017 | 01:07 AM - Posted by aaguilar

Hi Allyn,
Any idea how this memory would perform in a SATA M.2 port?
I was looking for a cheap laptop to replace my Chromebook, and saw this one on Amazon: Acer Aspire E 15 E5-575-33BM
thanks

May 1, 2017 | 04:41 PM - Posted by Pieter82

Hi Allyn, awesome review.

The write throughput of the 850 EVO with Optane gets cut in half. I can't find a QoS 4KB Random Write for the 960 EVO or Pro.

Do I understand it correctly that the write latency is lower on the 850 with Optane than on a single 850 or 960 EVO?

A 960 EVO 500GB is about the same price as an 850 EVO 500GB with 32GB of Optane.

Your advice?

May 9, 2017 | 10:52 PM - Posted by MRFS

http://www.legitreviews.com/intel-optane-memory-tested-boot-drive-second...

Intel Optane Memory Tested As Boot Drive, Secondary and RAID 0
Posted by Nathan Kirsch | Tue, May 09, 2017 - 9:42 AM

January 9, 2018 | 10:20 PM - Posted by Stever (not verified)

Was wondering: if you had a 32GB Optane, an NVMe SSD, and a SATA HDD, and the NVMe is the boot drive, would the SATA HDD speed increase? Would the NVMe speed lower?
