Intel Optane SSD 905P 960GB NVMe HHHL SSD Review - Bigger XPoint

Subject: Storage
Manufacturer: Intel

Performance Comparisons - Mixed Burst

These are the Mixed Burst results introduced in the Samsung 850 EVO 4TB Review, with a few tweaks: QD has been reduced to a more realistic value of 2, and the read bursts have been increased to 400MB each. 'Download' speed remains unchanged.

In an attempt to better represent the true performance of hybrid (SLC+TLC) SSDs, and to include some general trace-style testing, I’m trying out a new test methodology. First, all tested SSDs are sequentially filled to near maximum capacity. Then the first 8GB span is preconditioned with a 4KB random workload, resulting in the condition called for in many of Intel’s client SSD testing guides. The idea is that most of the data on an SSD is sequential in nature (installed applications, MP3s, video, etc.), while some portions have been written in a random fashion (MFT, directory structure, log file updates, other randomly written files, etc.). The 8GB figure is reasonably practical, since 4KB random writes across the whole drive is not a workload that client SSDs are optimized for (that is reserved for enterprise). We may try larger spans in the future, but for now we’re sticking with the 8GB random write area.
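For readers who want to reproduce a similar drive state, the preconditioning pass can be sketched as follows. This is a scaled-down illustration against an ordinary file (hypothetical file name and tiny sizes); a real run would target the raw device at full capacity with the 8GB span, typically via a purpose-built tool such as fio or Iometer:

```python
# Sketch of the preconditioning described above: sequential fill, then
# 4KB random writes over the leading span. Sizes here are deliberately
# tiny so the sketch runs anywhere; scale up for a real precondition.
import os
import random

def precondition(path, fill_bytes, random_span, block=4096, passes=2):
    """Sequentially fill a test file, then hit the first `random_span`
    bytes with 4KB random writes, mimicking the described drive state."""
    buf = os.urandom(block)
    with open(path, "wb") as f:              # sequential fill
        written = 0
        while written < fill_bytes:
            f.write(buf)
            written += block
    with open(path, "r+b") as f:             # random 4KB writes over span
        blocks_in_span = random_span // block
        for _ in range(passes * blocks_in_span):
            f.seek(random.randrange(blocks_in_span) * block)
            f.write(buf)

# 1MB file with a 256KB "random" span stands in for full-drive / 8GB
precondition("testfile.bin", fill_bytes=1 << 20, random_span=1 << 18)
```

The same shape of workload, pointed at a block device instead of a file, is what the review's precondition amounts to.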

With that condition in place as a base, we needed a workload. I wanted to start with some background activity, so I captured a BitTorrent download:


This download was over a saturated 300 Mbit link. While the average download speed was reported as 30 MB/s, the application’s own internal caching meant the writes to disk were more ‘bursty’ in nature. We’re trying to adapt this workload to one that will allow SLC+TLC (caching) SSDs some time to unload their cache between write bursts, so I came to a simple pattern of 40 MB written every 2 seconds. These accesses are more random than sequential, so we will apply it to the designated 8GB span of our pre-conditioned SSD.
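A minimal sketch of that burst pattern (hypothetical file name and scaled-down sizes; the real workload writes 40MB every 2 seconds at random offsets within the 8GB span):

```python
# Paced write bursts: a chunk of random-offset writes, then idle time,
# approximating the 40MB-every-2s 'download' pattern described above.
import os
import random
import time

def download_bursts(path, span, burst_bytes, interval, bursts, block=4096):
    """Write `burst_bytes` at random block offsets within the first
    `span` bytes, then sleep out the remainder of `interval`."""
    buf = os.urandom(block)
    with open(path, "r+b") as f:
        for _ in range(bursts):
            start = time.monotonic()
            for _ in range(burst_bytes // block):
                f.seek(random.randrange(span // block) * block)
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())
            # idle until the next burst is due
            time.sleep(max(0.0, interval - (time.monotonic() - start)))

# scaled-down demo; real values would be span=8GB, burst=40MB, interval=2s
with open("target.bin", "wb") as f:
    f.write(b"\0" * (1 << 20))
download_bursts("target.bin", span=1 << 20, burst_bytes=1 << 16,
                interval=0.05, bursts=3)
```

The idle gaps between bursts are the point: they give SLC-caching drives a chance to flush, which is exactly what this test is designed to allow.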

Now for the more important part. Since the above ‘download workload’ is a background task that would likely go unnoticed by the user, we also need a workload that the user *would* be sensitive to. People really notice their SSD's speed when they are waiting for it to complete a task, and the most common such tasks are application and game/level loads. I observed a round of different tasks and arrived at 200MB as the typical amount of data requested when launching a modern application. Larger games can pull in as much as 2GB (or more), varying with game and level, so we will repeat the 200MB request 10 times during the recorded portion of the run. We will assume 64KB sequential access for this portion of the workload.
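The read side can be sketched the same way. Note this is a QD1 simplification of the foreground burst; the actual harness issues the 64KB requests at higher queue depth:

```python
# One simulated application launch: a burst of sequential 64KB reads,
# timed. File name and burst size here are scaled-down stand-ins; the
# review's workload reads 200MB per launch, repeated 10 times.
import time

def app_load_burst(path, burst_bytes, block=64 * 1024, offset=0):
    """Issue `burst_bytes` of sequential 64KB reads starting at `offset`
    and return the elapsed wall time."""
    t0 = time.monotonic()
    with open(path, "rb") as f:
        f.seek(offset)
        remaining = burst_bytes
        while remaining > 0:
            chunk = f.read(min(block, remaining))
            if not chunk:        # hit end of file early
                break
            remaining -= len(chunk)
    return time.monotonic() - t0

with open("app.bin", "wb") as f:
    f.write(b"\1" * (1 << 20))
elapsed = app_load_burst("app.bin", burst_bytes=1 << 20)
```

Run concurrently with the paced write bursts above, this pair approximates the mixed workload the review applies.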

Assuming a max Queue Depth of 4 (reasonable for typical desktop apps), we end up with something that looks like this when applied to a couple of SSDs:


The OCZ Trion 150 (left) is able to keep up with the writes (dashed line) throughout the 60 seconds pictured, but note that the read requests occasionally catch it off guard. Apparently, if some SSDs are busy with a relatively small stream of incoming writes, read performance can suffer, which is exactly the sort of thing we are looking for here.

Applying the same workload to the 4TB 850 EVO (right), we see an extremely consistent and speedy response to all IOs, regardless of whether they are writes or reads. The 200MB read bursts are so fast that each occurs within a single second, and none of them spill over due to delays caused by the simultaneous writes taking place.

Now for the results:


From our Latency Percentile data we can derive the total service time for both reads and writes, and independently show the throughput seen for each. Remember that these workloads are applied simultaneously, so as to simulate launching apps or games during a 20 MB/s download. The above figures are not simple averages; they represent only the speed *during* each burst. Idle time is not counted.
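The derivation can be illustrated with a toy calculation (illustrative numbers only, not the review's raw data):

```python
def burst_throughput(ios):
    """Burst-only throughput from per-IO records of (bytes, latency_s).

    Summing per-IO latencies counts only time spent servicing requests;
    idle gaps between bursts never enter the total. (At QD > 1,
    overlapping IOs would require merging busy intervals instead; this
    simple sum assumes QD1-style accounting.)
    """
    total_bytes = sum(b for b, _ in ios)
    service_time = sum(lat for _, lat in ios)
    return total_bytes / service_time if service_time else 0.0

# e.g. ten 400MB read bursts totalling 3.4s of service time
reads = [(400 * 10**6, 0.34)] * 10
rate = burst_throughput(reads)   # bytes per second, bursts only
```

This is why the charts can show high burst throughput even for a run whose wall-clock time is dominated by idle waiting between bursts.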

The important metric here is reads, since the writes would be background activity in this scenario. The 905P shows incremental gains over the 900P, tracking as we'd expect based on what was seen so far in this review.


Now we are going to focus only on reads and present some different data. I’ve added up the total service time seen during the 10x 400MB reads that take place during the recorded portion of the test. These figures represent how long you would be sitting there waiting for 4GB of data to be read, but remember that this is happening while a download (or another similar background task) is simultaneously writing to the SSD. This metric should closely equate to the 'feel' of using each SSD under a moderate to heavy load, since it captures the actual time spent waiting for such a task to complete in the face of background writes.

Despite the faster throughputs, we see that the total time for the read workload remains at 3.4 seconds. No change from the 900P. Also take note that the 970 PRO is only 0.3 seconds slower at this workload.


May 2, 2018 | 01:04 PM - Posted by Chaitanya (not verified)

Intel really needs to rethink Optane for desktop. Quite a schizophrenic approach to marketing this product to consumers.

May 2, 2018 | 11:30 PM - Posted by Paul A. Mitchell (not verified)

I have to disagree, if only because Intel did make a prior decision to limit its M.2 Optane SSDs to x2 PCIe 3.0 lanes. However, I believe I saw very recent reports that the newer M.2 Optane controller is smaller and also uses x4 PCIe 3.0 lanes. See, for example, Intel's "enterprise" M.2 Optanes, photographs of which have already started appearing on the Internet. As such, it's only a matter of time before future Intel M.2 Optanes come with larger capacities that are more compatible with desktop designs. Also, keep your eye on upcoming 2.5" U.2 Optane SSDs, because they will integrate quite naturally into the 2.5" bays available in billions of PC chassis. On that point, I was also very happy to see that Icy Dock is now manufacturing a 5.25" enclosure that houses 4 x 2.5" NVMe SSDs:

May 2, 2018 | 11:35 PM - Posted by Paul A. Mitchell (not verified)

,5509.html

"The performance with our four-drive Optane 900P array is spectacular: the array achieved over 11,000 MB/s at a queue depth (QD) of 16. At QD8, we measured sequential read performance at just over 8,000 MB/s."

May 2, 2018 | 11:41 PM - Posted by Paul A. Mitchell (not verified)

Photos of Enterprise M.2 Optane SSDs are here:

May 2, 2018 | 01:28 PM - Posted by Anonymous### (not verified)


LEDs before SPECs.

I bet this is marketed towards gaming Intel consumers on Sudoku watch. In which month of this year does this become obsolete?

May 2, 2018 | 05:19 PM - Posted by Allyn Malventano

The LEDs are admittedly a bit silly, but this does remain the highest performing (all but sequential) and highest endurance client SSD available.

May 2, 2018 | 03:44 PM - Posted by Dark_wizzie

I'll wait until 960gb capacity is under $1000 used AND is in 2.5in form factor. So I'll be waiting a while.

May 2, 2018 | 03:54 PM - Posted by sircod

2.5in form factor for Optane? You mean with slow-ass SATA or non-existent U.2?

May 2, 2018 | 05:04 PM - Posted by Allyn Malventano

U.2 on desktop is a bit of an issue without cases that offer direct airflow across the bottom of the SSD (heatsink area). Drives that draw >10W in U.2 form factor will cook when left in stagnant air. That's if you even have the U.2 port in the first place. If not then you have to get creative with adapters...

May 2, 2018 | 10:12 PM - Posted by dstanding (not verified)

Well regarding the connector, it's pretty straightforward...either 8639 -> 8643, or 8639 -> 8643 -> M.2.

I can't really think of a platform on which NVMe would be reasonable which doesn't have at least an M.2 slot.

May 2, 2018 | 06:05 PM - Posted by dAvid (not verified)

can we have some real world testing please?

e.g. app loading, Windows boot times

May 3, 2018 | 10:10 AM - Posted by Anonymous2 (not verified)

Spoiler: it will be really fast, but will only "feel snappier".

May 3, 2018 | 05:42 PM - Posted by Allyn Malventano

You're at the point of diminishing returns over a 970 / 960 in boot times and most application loads (see the mixed burst read service time results for that).

May 4, 2018 | 08:25 AM - Posted by luckz (not verified)

According to 900P owner forum posts, it makes things like Regedit search or loading icons instant, and in those regards is a visible improvement over high-end NVMe NAND SSDs.

On the other hand, I have no idea how one would benchmark that beyond the classic 4k random QD1/QD2.

June 7, 2018 | 05:05 AM - Posted by Allyn Malventano

4K random read at low QD is exactly how you test for that, and the Optane parts crush those particular tests. It's just that most typical software hasn't caught up to the potential just yet.

May 2, 2018 | 07:45 PM - Posted by pdjblum

gave up thinking that these ultra fast drives, including the 970 and 960 pro and evos, are worth all that extra cash when i was able to buy the micron 2TB ssd with endurance of 400TB for $318 ($0.159/GB) on amazon

i doubt i will notice the difference

here is the link for the pragmatic or poor or both:

hope this helps some of you

May 4, 2018 | 08:26 AM - Posted by luckz (not verified)

It was even on sale for $270 the other day (USA only Rakuten seller).

I have it, and on my ancient board it reaches pretty low 4K QD1/QD2 scores, maybe half of what a modern NVMe SSD would do on a modern board.

May 4, 2018 | 08:37 PM - Posted by pdjblum

maybe you are constrained by sata 2 given you have an old mobo?

that $270 is an insane price

the drive is bare bones, but i have read good things about it to date, and the endurance is solid

May 6, 2018 | 11:52 AM - Posted by luckz (not verified)

I do have SATA3, just not native.

If you check, the best 4K random read anyone got on the Micron 1100 is 26 MB/s (with an average of 20 MB/s). The average for a modern NVMe drive is 50 MB/s, while an 850 Evo manages 38 MB/s average. So it's a good drive for the price, but the competition is 50-100% faster where it matters.
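That "50-100% faster" claim checks out against the numbers quoted (the comment's figures, not new measurements):

```python
# 4K random read figures quoted above, in MB/s
micron_1100_best = 26
evo_850_avg = 38
nvme_avg = 50

def pct_faster(a, b):
    """How much faster `a` is than `b`, as a whole percentage."""
    return round(100 * (a - b) / b)

# against the best Micron 1100 result, the competition works out to
# roughly 46% (850 Evo average) to 92% (modern NVMe average) faster
evo_gain = pct_faster(evo_850_avg, micron_1100_best)
nvme_gain = pct_faster(nvme_avg, micron_1100_best)
```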

May 2, 2018 | 09:55 PM - Posted by Takeshi7 (not verified)

Why did you only measure "burst" rates and not sustained random I/O? I bet the 905p would smoke the 970 in those cases. Especially if they are close to full.

This review seems very biased.

May 2, 2018 | 10:40 PM - Posted by Allyn Malventano

Well, if you care to read the performance focus page, you'll note that sustained *and* burst results are present, and for Optane SSDs (and non-caching NAND SSDs), the lines overlap, meaning sustained and burst performance are the same. Also, real client usage is not sustained. Does that give SLC caching NAND SSDs an advantage? Yes, but only compared with reviews that don't use realistic workloads, artificially *disadvantaging* those caching SSDs (sustained IO is not a realistic workload). We also don't test at the crazy high QD's that SSDs are typically rated at. Same reason.

Also, there's a whole page of this review dedicated to explaining why we test the way we do.

May 3, 2018 | 01:58 PM - Posted by Takeshi7 (not verified)

So basically you admit that you tested this drive in a way that it wasn't designed for. Intel's own material says it's meant for "high endurance" workloads. Tom's Hardware says "Intel bills the 905P as a workstation product designed to accelerate extended workloads."

Burst workloads do not represent this use case, regardless of how "realistic" they are for the average consumer.

May 3, 2018 | 05:48 PM - Posted by Allyn Malventano

It's a high-end gamer-oriented SSD. With LEDs. The 900P (essentially the same product) ships with a free license to a game in the box. Intel's own documentation states it is for "desktop or client workstations". Additionally, workstation workloads operate at similar QD's to desktop, the difference being that workstations see those workloads at a higher frequency / for greater TBW, etc, and they do not see sustained operation at high QD's. Finally, with the burst and sustained results being equal, your assertion that I am testing it in a way it is not designed for is irrelevant (aside from also being false).

May 2, 2018 | 11:14 PM - Posted by Paul A. Mitchell (not verified)

Allyn's expert focus on latency needs to be appreciated together with the raw bandwidth that becomes available by using x16 PCIe slots, as opposed to connecting downstream of Intel's DMI 3.0 link.

From a research point of view, the availability of "bifurcated" x16 slots has now made possible options like the ASRock Ultra Quad M.2 card installed in an AMD Threadripper motherboard (and AICs like it).

(Hey, gals and guys, no need to "dangle the dongle"!)

Similar quad-M.2 add-in cards come withOUT an integrated RAID controller, because the RAID logic is performed directly by an available CPU core.

As such, designers can now choose to populate these add-in cards with M.2 SSDs and/or M.2-to-U.2 adapter cables.

So, picture this feasible setup: 4 x Samsung 970 EVO SSDs installed in an ASRock Ultra Quad M.2 AIC that is plugged into an x16 slot on the ASRock X399M micro-ATX motherboard.

Then, install 2 x Samsung 970 Pro SSDs in two of the three M.2 slots integrated on that same motherboard.

Lastly, sacrifice the third integrated M.2 slot by choosing instead a U.2 cable that connects directly to a U.2 Optane SSD.

If I had the money, I would be buying the required parts tomorrow. Alternatively, I would be shipping some of those parts directly to Allyn, so he could do his expert testing with other parts he already has in his lab.

The really good news is that ASRock Tech Support replied very promptly to my email request for the steps required to configure a RAID-0 array, using their Ultra Quad M.2 card and their X399M motherboard. I immediately forwarded ASRock's detailed instructions to Allyn.

Lastly, put all of the above in the visible future context of PCIe 4.0, which doubles the transfer rate to 16 GT/s.

What is truly amazing to me, about these recent developments, is that mass storage is now very close to performing at raw speeds comparable to DDR3 and DDR4 DRAM, and withOUT the volatility that comes with DRAM.

May 4, 2018 | 08:30 AM - Posted by luckz (not verified)

RAID never brings only performance gains, so whether it makes any sense at all depends on the workload.

Why M.2 => U.2 => Optane instead of just PCIe HHHL-ing it in?

May 7, 2018 | 03:44 PM - Posted by Paul A. Mitchell (not verified)

Here's one possible answer:,5509.html

May 3, 2018 | 11:37 AM - Posted by ben gods (not verified)

Almost 20 watts, and at that price.

gold award my azz, lol

NOW I understand why that PCPerGoldPNG-300.png is looking more brownish than gold. Round rim, it has a light tongue.

is this s(h)ite giving gold awards to any mofo company that approves of your existence?

May 3, 2018 | 05:53 PM - Posted by Allyn Malventano

If you care about 20W for a high-end SSD in a desktop chassis then this product is not for you. High-end GPUs idle at the same power draw that this SSD consumes fully loaded. Also, we dropped it down to gold specifically due to the price and called it out for that in the article multiple times. Maybe read some of those words instead of being so fixated on the award pictures, mmmk?

May 3, 2018 | 06:35 PM - Posted by Paul A. Mitchell (not verified)

I would like to come to Allyn's defense, using my own use case as an example:

First of all, we have really enjoyed the prolonged productivity we have experienced by loading our 14GB website image into a ramdisk. Simple tasks like browsing and indexing are noticeably faster, and they cause no wear on quality DRAM that has a lifetime warranty (we use a Corsair matched quad that cost >$700 brand new).

Here's the rub: the ramdisk software that we chose comes with a feature that SAVES and RESTORES the ramdisk contents during shutdown and startup. As our ramdisk has grown, both the SAVE and RESTORE tasks have naturally required more and more time to complete. This would not be a big deal, except for those days when we are required to RESTART, for one reason or another.

Accordingly, any routine RESTART must first SAVE the ramdisk's contents (as enabled), and then the same RESTART must RESTORE the ramdisk's contents from non-volatile storage. Thus, reading and writing take place TWICE during every routine RESTART.

(Yes, I am aware that we can always disable that ramdisk, to accelerate RESTARTS; but then, re-loading the ramdisk takes a whole lot longer, using that approach.)

The non-volatile storage which SAVEs our ramdisk is reading about 1,900 MB/second. By switching to 4 x Samsung 970 EVO in RAID-0, the same task should be reading about 10,000 MB/second, or FIVE TIMES the speed of our current (aging) workstation. Similarly, 32GB of new DDR4-3200 should read about FOUR TIMES faster than the 16GB of DDR2-800 now in that aging workstation.
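That arithmetic checks out (figures as quoted above; the 10,000 MB/s RAID-0 number is an estimate, not a measurement):

```python
def transfer_seconds(size_gb, mb_per_s):
    """Time to stream `size_gb` (decimal GB) at `mb_per_s`."""
    return size_gb * 1000 / mb_per_s

current = transfer_seconds(14, 1900)    # ~7.4 s per SAVE or RESTORE
raid0 = transfer_seconds(14, 10000)     # ~1.4 s
speedup = current / raid0               # ~5.3x, i.e. "FIVE TIMES"

# a routine RESTART does both a SAVE and a RESTORE, so the wall time
# saved per restart is roughly 2 * (current - raid0) seconds
```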

(Hey, gals and guys, I am TOTALLY aware that our DDR2-800 is obsolete, but that workstation continues to function perfectly, so why fix what ain't broke?) :) As soon as I can afford the large incremental cost, I'll be building a brand new Threadripper workstation.

Hope this helps.

May 4, 2018 | 08:37 AM - Posted by luckz (not verified)

Just make sure to have an offsite (incremental) backup too :D
With 4x RAID-0 you bump the data loss risk up quite a bit.

Otherwise seems sound. Depending on whether you even need that much CPU performance and that many lanes, a consumer Intel chipset (you don't seem to seek ECC?) with integrated graphics could also host a ton of NVMes via its PCIe slots. The i5 8400 is a fraction of the cost of a Threadripper.

As for synchronising the RAM disk to HDD, periodic use of a tool like is also an option.

May 4, 2018 | 01:19 PM - Posted by Paul A. Mitchell (not verified)


Historically speaking, we have had almost zero problems
with several RAID-0 arrays built with 4 x 6G SSDs:
to date, we prefer Samsung and SanDisk wired to an
inexpensive add-in card.

To synchronize our ramdisk with our non-volatile storage,
we use a simple XCOPY sequence:

xcopy E:\folder R:\folder /s/e/v/d
xcopy R:\folder E:\folder /s/e/v/d

That does the job (if you don't mind Command Prompt).

Then, we backup E:\folder with another batch file
that copies updates over a LAN to older PCs.

Those older PCs act as storage servers that we power up
long enough to perform that task, then power them down.

Those storage servers are also an informal experiment
to measure just how long an obsolete PC will work,
with proper care, maintenance and UPS input power.

p.s. I don't usually need much of the discussion about
"random" and "sequential" workloads, for our purposes,
because routine tasks like updating a COPERNIC index
involve both modes of access. For that reason, we prefer
to have both kinds of storage, so that sequential tasks
like drive images can be done with fast sequential drives,
and random tasks can be done with fast random drives.

Our ramdisk software from has been
absolutely fantastic -- the effects on productivity
have been huge. Plus, all that computing using DRAM
has reduced wear on our other storage subsystems:

May 4, 2018 | 10:30 PM - Posted by ConsumerGradeKitIsNotWorkstationReady (not verified)

Ha ha ha, Threadripper for a workstation is a joke compared to Epyc/SP3! Gamers really do not get what real workstations are about, and it's not about some damn game running crappy gaming graphics at some stupid FPS. Professional graphics workstation users want stability for their many-hours-long rendering workloads, and that's different from consumer/gaming SKUs like TR/X399 motherboards that are not really tested/certified and vetted for ECC memory usage.

Epyc is a real server/workstation-grade CPU/motherboard ecosystem, and Threadripper does not make the grade for real production workstation workloads.

Stop that madness, all you enthusiast websites, with your affiliate-code kickback schemes with the consumer marketing divisions of these companies, trying to foist non-workstation-grade hardware for the extra revenue at the expense of the truth. Epyc is AMD's real server/workstation-grade branding, not any consumer Threadripper/Ryzen part that is not professionally certified/tested and vetted for system stability and error-free memory usage. Epyc is the TRUE workstation price/feature winner against Intel, and against any other consumer/AMD gaming-oriented hardware that does not make the grade for actual professional workstation production workloads.

Threadripper even being mentioned in the same article as "workstation" is the very epitome of disingenuousness! Real professionals use real workstation hardware, and AMD's Epyc SKUs are more affordable than Intel's Xeons, the better price/feature deal even compared to Threadripper. AMD is not Intel, so AMD's real Epyc workstation/server-branded parts are affordable enough that users are not forced to play at being professional while only able to afford Intel's non-workstation-grade consumer trash!

May 7, 2018 | 08:19 PM - Posted by Paul A. Mitchell (not verified)

If anyone is interested, ASRock replied to our query with simple instructions for doing a fresh install of Windows 10 to an ASRock Ultra Quad M.2 card installed in an AMD X399 motherboard. We uploaded that .pdf file to the Internet here:

May 10, 2018 | 08:20 PM - Posted by Paul A. Mitchell (not verified)

FYI: comments on ASRock Ultra Quad M.2 AIC:

Reportedly, Intel's Enterprise M.2 Optane uses x4 PCIe lanes:

"From talking to some of our hyper-scale data center contacts, we expect this new Optane m.2 drive to be PCIe x4 and significantly faster than the desktop drives. Perhaps given the DC P4510 and P4511 naming convention this will become the Intel DC P4801X or a new class of drives like a P4601X.

"Still, the continual march of the m.2 form factor in servers, even in the dense OCP server platforms, is ongoing. It is great to see that a proper Intel Optane DC drive is coming to the m.2 slot."
