The Samsung Enterprise SSD Roundup - 883 DCT, 983 DCT, 983 ZET (Z-SSD) Tested!

Subject: Storage
Manufacturer: Samsung

Samsung 983 ZET AIC Performance Comparisons vs. Intel Optane DC P4800X 750GB

Alright, after pages and pages of charts, here are a few more, but these are to compare and contrast the new Z-SSD with the Intel Optane SSD DC P4800X. I'm going to present the data and then explain what we are seeing (and more importantly, why). First up we'll compare QoS results:

High-Resolution QoS - Random 4KB Read:


High-Resolution QoS - Random 4KB 70/30 R/W mix:


High-Resolution QoS - Random 4KB Write:


While a greater share of the P4800X's IOs complete faster than the Z-SSD's, Samsung's part shows higher performance consistency so long as the queue depth remains low and reads and writes are not mixed. This is very impressive.
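As a point of reference for reading these QoS charts, a percentile latency is simply the value under which that fraction of all IOs completed. A minimal sketch of the calculation, using hypothetical sample data (not measurements from either drive):

```python
import math

def qos_percentiles(latencies_us, percentiles=(0.99, 0.999, 0.9999)):
    """Return the latency at each QoS percentile: the value under which
    that fraction of all recorded IOs completed."""
    ordered = sorted(latencies_us)
    result = {}
    for p in percentiles:
        # Index of the last IO still inside the percentile cutoff.
        idx = math.ceil(p * len(ordered)) - 1
        result[p] = ordered[idx]
    return result

# Hypothetical sample: 990 IOs at 20 us plus 10 slow outliers at 400 us.
print(qos_percentiles([20] * 990 + [400] * 10))
# 99% of IOs completed within 20 us, but the 99.9th percentile is 400 us -
# which is why a drive can look fast on average yet poor on a QoS chart.
```

This is also why consistency and raw speed can diverge: a drive with slower but tightly clustered IOs can beat a faster drive at the highest percentiles.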

Random IOPS R/W (10% increment) and QD (2^x) sweep:


Sequential R/W (10% increment) and QD (2^x) sweep:


The Optane SSD generally performs better and does so at lower queue depths when compared in terms of average IOPS and sequential throughput.
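For context, the sweep behind these surface charts is just a grid: every R/W mix in 10% increments crossed with every power-of-two queue depth. A sketch of how that test matrix might be generated (the `run_point` function is a hypothetical placeholder for the actual benchmark tool, not part of the review's methodology):

```python
import itertools

# R/W mix from 0% read (pure write) to 100% read, in 10% increments.
read_percentages = list(range(0, 101, 10))

# Queue depths as powers of two: 1, 2, 4, ..., 256.
queue_depths = [2 ** x for x in range(9)]

# Full test matrix: every mix at every queue depth.
test_matrix = list(itertools.product(read_percentages, queue_depths))
print(len(test_matrix))  # 11 mixes x 9 queue depths = 99 data points

def run_point(read_pct, qd):
    """Hypothetical placeholder: apply the workload and return IOPS."""
    raise NotImplementedError
```

Each (mix, QD) pair becomes one point on the surface chart, which is why these plots take so long to generate.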

Write Pressure:


This test was originally developed by Intel. While it naturally plays to Optane's strengths, it is still a valid workload and makes a very strong point in favor of Optane parts, so I felt it was worth including here to show where something like the Z-SSD lands.

Long story short, it does not do well at all in this test, but that's not to say the Z-SSD is a bad product. NAND flash, regardless of the latency magic Samsung has pulled off with it, still has its limits. Chief among them is the Z-SSD's random write ceiling, which comes in at less than 100k IOPS. I stop the plot for each SSD once the drive can no longer keep up with the applied paced random write workload, and that happens at the expected point for the Z-SSD.

What is worth noting is that even under the lighter random write pressures early in the test, the Z-SSD's read latency climbs much more quickly than the Optane SSD's. Again, this is not necessarily a bad thing - it simply reinforces the type of workload the Samsung Z-SSD is designed to accelerate. Data analytics typically reads from random locations across a relatively large storage pool, with little write pressure taking place during those operations. For that use case - and any similar workload with minimal writes during relatively low-QD reads - the Z-SSD may be a more economical solution than the Intel Optane part.
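To illustrate what "paced" means here: rather than hammering the drive as fast as it will go, the write pressure test issues random writes on a fixed schedule and steps the target rate upward until the drive can no longer keep up. A rough sketch of how such a schedule could be built (illustrative only - this is not the actual test tool):

```python
import random

def paced_random_writes(total_ios, target_iops, span_bytes, io_size=4096, seed=7):
    """Yield (issue_time_s, byte_offset) pairs for a paced random-write load.

    A fixed 1/target_iops gap between issues holds the applied write
    pressure constant; a drive "keeps up" only while its sustained random
    write capability meets or exceeds target_iops.
    """
    rng = random.Random(seed)
    gap = 1.0 / target_iops
    blocks = span_bytes // io_size
    for i in range(total_ios):
        # Random 4KB-aligned offset somewhere in the tested span.
        yield i * gap, rng.randrange(blocks) * io_size

# Example: three writes paced at 1000 IOPS across a 1 MiB span.
schedule = list(paced_random_writes(3, 1000, 1 << 20))
```

Read latency is then sampled concurrently at each pressure step, which is how the chart shows reads degrading as write pressure rises.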


December 14, 2018 | 07:30 PM - Posted by Isaac Johnson

Thanks for doing reviews on stuff like this. Enterprise storage is neat.

December 15, 2018 | 06:47 PM - Posted by James

What exactly are the negatives of the Z flash vs. XPoint? It seems like random writes would only really occur when you are paging 4k pages out, although even then, they don't have to be written to a specific location, so it seems like that should look sequential. Most other writes will be the system flushing disk cache out, which seems like it should also look mostly sequential. The other workload would be more real-time data collection type things, which should also be sequential. I am on my phone at the moment, so I can't go over the graphs in too much detail. Are there issues with sequential write performance on the Z flash? When would random write ever really be a performance limitation in secondary storage?

If you are using non-volatile memory on a DRAM memory interface at byte-addressable levels, then random write performance is of massive importance. It is somewhat unclear to me how much of an advantage XPoint memory connected to the DRAM memory bus will be. Will there be competition, or is this going to be limited to Intel-only systems? I am sure Intel would like to keep it for themselves. It would only be useful in certain applications, but there really isn't any flash-based competition to XPoint in a DRAM slot. You could put flash in such a form factor, but it can't be used in the same way.

I have to deal with an ancient NFS server at work, which will hopefully be upgraded soon. I may have to wait until the new year though. I don't know how much the budget will be. If it is limited to a 64 GB memory machine, then I would definitely want to get SSDs for the storage system. It is a small cluster for build and test, so the budget isn't that high. The IT guys will be responsible for configuring it though, and I am not sure what will be available from the suppliers they use. The actual software builds are the main stress on the machine. It is a large software base and the build system writes out a huge amount of temporary files. We definitely will need enterprise-level durability, although the lower-end enterprise parts may be fine, especially if the write load is spread across a RAID array. My current suspicion is that even if I get them to upgrade more of the network to 10 GbE, the choice of storage system will only make a difference between SSD and spinning rust. That is, a RAID of SATA SSDs would probably be indistinguishable from a fast PCIe/NVMe solution. I don't know if the built-in motherboard RAID is acceptable in these situations. It will almost certainly be an Intel-based system.

December 16, 2018 | 05:00 PM - Posted by Paul A. Mitchell (not verified)

> "there really isn’t any flash based competition to xpoint in a DRAM slot"

I'm very interested in this particular question, primarily because it's relevant to several related topics in computer science, if not immediately relevant to PC prosumers.

My somewhat informed observation about Intel is that they must have spent tons of money on Optane R&D, and their initial marketing programs could have done a much better job of motivating more rapid user experimentation and brand loyalty. For example, coming out of the gate with low-capacity x2 M.2 versions intended for "caching" was a marketing mistake, imho.

In other words, without having financial statements in hand to read for myself, I do suspect that Optane revenues have fallen short of Intel's predictions for ROI (return on investment). That said (and I could be wrong about this), I am now expecting that Intel will do whatever it can to lock and limit Optane DIMMs to a limited number of Intel chipsets.

Assuming I am even close to being correct on that point, the question above is one of great importance to the future of high-performance computing:

"Is there any flash based competition to XPoint in a DRAM slot?"

Answer: No, unless tier 1 vendors like Samsung are actively pursuing such competition but doing so in total secrecy.

Let me give you a real-life example: I can foresee a niche storage product that would benefit greatly from non-volatile DRAM in the SO-DIMM form factor. I honestly do not expect that Intel is planning to manufacture Optane SO-DIMMs any time soon (e.g. during the next 3 to 5 years). As such, in order for this niche product to become a reality, it will need to use an entirely different non-volatile DRAM technology, e.g. Spin-Torque MRAM.

But where else do we look for alternatives to Intel's current Optane products? I really do NOT have any good answers to that question. Yes, there is a ton of documentation on the potential of non-volatile DRAM, but all of the documented alternatives to Optane appear to be "in development" and/or needing more research before they are ready for volume manufacturing and adequate storage capacity. And that situation has remained mostly unchanged for several years.

p.s. Many thanks, once again, for another excellent review, Allyn.

December 16, 2018 | 05:23 PM - Posted by Paul A. Mitchell (not verified)

Good article here, dated August 16, 2018:

e.g. concerning Intel's joint venture with Micron:

"Clearly, Intel has the resources to go it alone in 3D XPoint. But the question is whether Intel will ever recoup its massive R&D investments with the technology."

This observation goes a long way towards explaining the MSRPs of current Optane products.

I would argue that Samsung also has the resources to develop serious competition to all current and future Optane products, e.g. by teaming with AMD?

December 16, 2018 | 05:34 PM - Posted by Paul A. Mitchell (not verified)

Another revealing quote from that same article, re: a DRAM-compatible high-capacity device that will compete with DRAM:

Meanwhile, in R&D, Nantero is developing carbon nanotube RAMs. For embedded apps, Fujitsu is expected to offer the first carbon nanotube RAMs based on Nantero’s technology.

“The strategy is to do embedded memory for logic. Fujitsu will be ramping that in 2019,” Nantero’s Doller said. “In the meantime, what we are working on is a DRAM-compatible high-capacity device. That will compete with DRAM.”

So the next-generation memories are making progress, giving OEMs plenty of options. But they still have a long way to go before they are mainstream devices. They may never reach that point, as DRAM and flash continue to roll along.


December 16, 2018 | 07:53 PM - Posted by NewMarketsCostsMuchSoEaarilyInTheGame (not verified)

Don't count Micron out just because they decided not to go to market with the first generation of XPoint IP. Micron is going to market with 2nd-generation XPoint, but Intel does have a lead at this point. Micron would not be buying out Intel's share of the current Micron/Intel XPoint fab if it did not plan to enter the XPoint marketplace.

Intel and Micron will be going their own individual ways after 2nd-generation XPoint is ready for market, so Micron and others can enter into agreements if Micron's business plan differs from what Intel has planned.

Samsung will be forced into developing some competition to XPoint, as Samsung's Z-NAND will only go so far in keeping up with 2nd-generation XPoint. XPoint competition has already forced Samsung to work more SLC NAND into its designs in order to keep up with XPoint's better random R/W access times.

Here is an interesting article from The Memory Guy(1) showing that Intel has been earning little to no money on first-generation XPoint sales. So maybe Micron is wise to let that market wait for 2nd-generation products and let Intel take more risk.


"Intel’s Losses Amid Others’ Gains

Published December 11, 2018 | By Jim Handy"

December 17, 2018 | 03:20 PM - Posted by Allyn Malventano

Z-NAND disadvantages are primarily based on its maximum random write capability, which translates to lower mixed-workload capability as well. Compared against 3D XPoint, Z-NAND has a much harder time under heavier / mixed workloads. So long as the workload is relatively lightweight (minimal queue depth, only read *or* write - not both at the same time), it remains closer to 3D XPoint speeds than NAND speeds.

The reason random writes look as good as they do is partially that those writes are buffered as they come into the controller, meaning the data is not actually committed to flash until a short time later. This is evidenced by the relatively low maximum random write IOPS rating and the immediate climb in latencies at QD>1. All of that said, Samsung's QoS at those lower loads looks more consistent than the compared P4800X, but that consistency appears to come at a cost: the Intel part is much faster overall, sometimes offering multiples of the raw IOPS and latencies roughly one-half to one-third those of the Z-SSD.
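The buffering effect described above can be illustrated with a toy queue model (purely illustrative numbers, not Samsung's actual controller behavior): writes are acknowledged from the buffer almost instantly, right up until arrivals outpace the flash program rate and the buffer fills.

```python
def stalled_writes(arrivals_per_tick, drains_per_tick, ticks, buffer_slots):
    """Toy model of a controller write buffer. Writes are acked from the
    buffer (fast) while there is room; once the arrival rate exceeds the
    flash drain rate and the buffer fills, further writes stall at full
    flash program latency - which the host sees as climbing latency."""
    occupancy = 0
    stalled = 0
    for _ in range(ticks):
        occupancy = max(0, occupancy - drains_per_tick)  # programs complete
        for _ in range(arrivals_per_tick):
            if occupancy < buffer_slots:
                occupancy += 1   # absorbed by the buffer: fast ack
            else:
                stalled += 1     # buffer full: host-visible latency spike
    return stalled

# Writes arriving no faster than flash can drain: the buffer hides everything.
print(stalled_writes(1, 1, 100, 10))  # 0
# Arrivals at twice the drain rate: the buffer fills, then latency transfers.
print(stalled_writes(2, 1, 100, 10))  # 91
```

This matches the observed behavior: low-QD writes look nearly XPoint-fast, but the advantage evaporates as soon as sustained pressure exceeds what the flash behind the buffer can absorb.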

Regarding the advantage of XPoint in a DIMM form factor, that depends on the software. Intel will likely have a driver layer (Intel Memory Drive Technology) that would enable its use as an extended pool of RAM, and that may help speed up things that were previously overrunning RAM and swapping to disk. The real benefit will come once software is coded to take direct advantage of the media. Software has really been the limiting factor for Optane so far. Standard Windows kernel DMA response times already impose a +50% latency penalty on Optane NVMe parts. Storage needs to be treated differently at a fundamental level to take better advantage of 3D XPoint media.

December 19, 2018 | 04:39 PM - Posted by Paul A. Mitchell (not verified)

> Storage needs to be treated differently at a fundamental level to take better advantage of 3D XPoint media.

Good point, Allyn! ASUS DIMM.2 slots come to mind: at least one ASUS motherboard supports two of those DIMM.2 slots, each carrying 2 x NVMe SSDs (4 total).

Because of their spatial proximity to the CPU socket, these DIMM.2 slots really deserve low-overhead drivers that minimize access times to that CPU socket.

> Software has been the limiting factor for Optane so far really.

This implies that there are opportunities for a lot more software optimization too.

December 19, 2018 | 06:10 PM - Posted by Allyn Malventano

The DIMM.2 slots are using standard PCIe. It has the same overhead regardless of distance from the CPU (within reason). Drivers won't really help there - not any more than they already do at least. The answer to that latency problem is using the real DIMM slots with the DDR protocol.

December 15, 2018 | 07:35 PM - Posted by Anonymoushghghgh (not verified)

Allyn, you have the best storage reviews of any site.

December 16, 2018 | 02:38 PM - Posted by Anoyingmouse (not verified)

Allyn, do you know why all of Samsung's microSD cards have seen massive price cuts lately? Thanks.

December 17, 2018 | 03:28 PM - Posted by Allyn Malventano

SD media costs have been going down across the board. Might just be some holiday pricing competition.

December 17, 2018 | 07:06 PM - Posted by Wes Baggerly (not verified)

Allyn, did Samsung mention if there would be a SAS (Serial Attached SCSI) SSD in their Enterprise lineup?

December 19, 2018 | 06:15 PM - Posted by Allyn Malventano

They have had a few SAS models in the past, but none of the new channel products are. I think it's just easier to go SATA for the particular use case that would benefit from it.

December 20, 2018 | 07:29 AM - Posted by Jerrymyr (not verified)

That's a point!
