The Samsung Enterprise SSD Roundup - 883 DCT, 983 DCT, 983 ZET (Z-SSD) Tested!

Subject: Storage
Manufacturer: Samsung

Conclusion, Pricing, and Final Thoughts

Conclusion

Pros

  • Solid performance
  • Solid build quality
  • Competitive pricing

Cons

  • Z-SSD may need to come down a bit in price to better compete with Intel Optane


Pricing (cost/GB) (endurance / warranty)

  • 860 DCT (2.5" SATA)
    • 960GB - $349.99 ($0.36/GB) (0.2 DWPD 5 yr)*
    • 1.9TB - $629.99 ($0.33/GB) (0.2 DWPD 5 yr)*
    • 3.8TB - $1209.99 ($0.32/GB) (0.2 DWPD 5 yr)*
  • 883 DCT (2.5" SATA)
    • 240GB - $109.99 ($0.46/GB) (1.3 DWPD 3 yr)*
    • 480GB - $164.99 ($0.34/GB) (1.3 DWPD 3 yr)*
    • 960GB - $319.99 ($0.33/GB) (1.3 DWPD 3 yr)*
    • 1.9TB - $639.99 ($0.33/GB) (1.3 DWPD 3 yr)*
    • 3.8TB - $1269.99 ($0.33/GB) (1.3 DWPD 3 yr)*
  • 983 DCT (2.5" NVMe U.2)
    • 960GB - $359.99 ($0.37/GB) (1.3 DWPD 3 yr)*
    • 1.9TB - $709.99 ($0.37/GB) (1.3 DWPD 3 yr)*
  • 983 DCT (M.2 NVMe 22110)
    • 960GB - $359.99 ($0.37/GB) (1.3 DWPD 3 yr)*
    • 1.9TB - $709.99 ($0.37/GB) (1.3 DWPD 3 yr)*
  • 983 ZET (HHHL NVMe)
    • 480GB - $1189.99 ($2.48/GB) (8.5 DWPD 5 yr)
    • 960GB - $2379.99 ($2.48/GB) (10 DWPD 5 yr)
  • P4800X (for comparison - current street prices)
    • 375GB - $1100 ($2.93/GB) (30 DWPD 5 yr)
    • 750GB - $2300 ($3.07/GB) (30 DWPD 5 yr)
    • 1.5TB - $6500 ($4.33/GB) (30 DWPD 5 yr)

*note: Samsung rates the *'d products at full-span 4KB random writes. These figures are very conservative and may not be directly comparable to competing enterprise NAND product ratings, which may be based on flash writes rather than host writes (i.e., not accounting for write amplification).
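For readers cross-checking the list above, the cost-per-GB and endurance figures reduce to simple arithmetic. The sketch below (illustrative only; it uses the list prices and DWPD ratings quoted above with nominal GB capacities) shows how a $/GB value and an approximate total-bytes-written figure fall out of a price, a capacity, a DWPD rating, and a warranty length.

```python
# Illustrative pricing/endurance math for the list above.
# Capacities are nominal GB; the derived TBW is an approximation, not Samsung's official rating.

def cost_per_gb(price_usd: float, capacity_gb: float) -> float:
    """Street price divided by nominal capacity."""
    return price_usd / capacity_gb

def approx_tbw(capacity_gb: float, dwpd: float, warranty_years: float) -> float:
    """Drive Writes Per Day sustained over the warranty period, expressed in TB written."""
    return capacity_gb * dwpd * 365 * warranty_years / 1000

# 983 DCT 960GB: $359.99, 1.3 DWPD, 3-year warranty
print(f"983 DCT 960GB: ${cost_per_gb(359.99, 960):.2f}/GB, ~{approx_tbw(960, 1.3, 3):,.0f} TBW")

# 983 ZET 960GB: $2379.99, 10 DWPD, 5-year warranty
print(f"983 ZET 960GB: ${cost_per_gb(2379.99, 960):.2f}/GB, ~{approx_tbw(960, 10, 5):,.0f} TBW")
```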

First up - those 883 and 983 DCT prices are great! These SSD prices look more like those of consumer parts, not enterprise-grade items. Next up is the 983 ZET. While I was analyzing the test results, I was careful to temper its shortcomings with the fact that these are $2.50/GB parts vs. the $4/GB of Optane, and the performance niggles of the Z-SSD were justifiable given the near 50% cost savings. But when I went to look up current prices for the P4800X, I found that they had dropped by nearly a third since launch. The 750GB P4800X now comes much closer to the current price of the 983 ZET, which makes the Samsung part harder to justify; it is only the better choice if you are confident that your particular workload will play directly to its strengths.

Final Thoughts

I'm thrilled to see Samsung finally bringing their enterprise parts into the channel, and in testing these products, we found that they do not disappoint. Specs were typically met without issue and were frequently exceeded by a fair margin. Quality of Service proved to be outstanding, living up to the reputation I had frequently heard about in the past. Build quality was as you would expect for enterprise products designed to handle the tougher enterprise environment. Pricing appears reasonably competitive, though Samsung might need to bring the ZET pricing down a tad to better compete with 3D XPoint. On the topic of the ZET (Z-SSD), it was great to see Samsung work some serious magic with their 3D NAND, turning in the best latency numbers I've ever seen from a flash-based product. Sure, the media still has its limits, but for what it is, the Z-SSD is a feat of engineering in its own right. Overall, I'm impressed with what Samsung has to offer in the enterprise space and am happy to know that these parts can now be more easily obtained by smaller businesses and even workstation users.


I'm awarding Gold across this entire lineup for various reasons. To the 883 and 983 DCT for solid performance at a competitive cost/GB, and to the 983 ZET for accomplishing no small feat of latency reduction while operating within the limitations of NAND flash.

December 14, 2018 | 07:30 PM - Posted by Isaac Johnson

Thanks for doing reviews on stuff like this. Enterprise storage is neat.

December 15, 2018 | 06:47 PM - Posted by James

What exactly are the negatives of the z flash vs. xpoint? It seems like random write would only really occur when you are paging 4k pages out, although even then, they don’t have to be written to a specific location, so it seems like that should look sequential. Most other writes will be the system flushing disk cache out, which seems like it should also look mostly sequential. The other workload would be more real-time data collection type things, which should also be sequential. I am on my phone at the moment, so I can’t go over the graphs in too much detail. Are there issues with sequential write performance on the z flash? When would random write ever really be a performance limitation in secondary storage?

If you are using non-volatile memory on a DRAM memory interface at byte addressable levels, then random write is of massive importance. It is somewhat unclear to me how much of an advantage xpoint memory connected to a DRAM memory bus will be. Will there be competition or is this going to be limited to Intel only systems? I am sure intel would like to keep it for themselves. It would only be useful in certain applications, but there really isn’t any flash based competition to xpoint in a DRAM slot. You could put flash in such a form factor, but it can’t be used in the same way.

I have to deal with an ancient nfs server at work, which will hopefully be upgraded soon. I may have to wait until the new year though. I don’t know how much the budget will be. If it is limited to a 64 GB memory machine, then I would definitely want to get SSDs for the storage system. It is a small cluster for build and test, so the budget isn’t that high. The IT guys will be responsible for configuring it though and I am not sure what will be available from the suppliers they use. The actual software builds are the main stress on the machine. It is a large software base and the build system writes out a huge amount of temporary files. We definitely will need enterprise level durability, although the lower end enterprise parts may be fine, especially if the write load is spread across a raid array. My current suspicion is that even if I get them to upgrade more of the network to 10 GbE, the choice of storage system will only make a difference between SSD and spinning rust. That is, a raid of SATA ssd drives would probably be indistinguishable from a fast pci-e/nvme solution. I don’t know if the built on motherboard raid is acceptable in these situations. It will almost certainly be an Intel based system.
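For rough scale on the 10 GbE question above: a single 10 GbE link tops out around 1.25 GB/s before protocol overhead, so a small RAID of SATA SSDs can already saturate it, which is consistent with the suspicion that SATA and NVMe arrays would be hard to tell apart over that network. A quick back-of-envelope sketch follows; the per-drive figures are assumed typical values, not measurements from this review.

```python
# Back-of-envelope check: does the network or the storage hit its ceiling first?
# All per-drive numbers below are assumed "typical" figures, not review measurements.

link_gbps = 10                           # 10 GbE
link_bytes_per_s = link_gbps * 1e9 / 8   # ~1.25 GB/s, ignoring protocol overhead

sata_seq_bytes_per_s = 550e6             # ~550 MB/s sequential per SATA III SSD
nvme_seq_bytes_per_s = 3000e6            # ~3 GB/s sequential for a fast PCIe 3.0 x4 NVMe SSD

print(f"10 GbE ceiling:             {link_bytes_per_s / 1e9:.2f} GB/s")
print(f"SATA SSDs to saturate link: {link_bytes_per_s / sata_seq_bytes_per_s:.1f}")
print(f"One NVMe SSD vs. link:      {nvme_seq_bytes_per_s / link_bytes_per_s:.1f}x the link bandwidth")
```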

December 16, 2018 | 05:00 PM - Posted by Paul A. Mitchell (not verified)

> "there really isn’t any flash based competition to xpoint in a DRAM slot"

I'm very interested in this particular question, primarily because it's relevant to several related topics in computer science, if not immediately relevant to PC prosumers.

My somewhat informed observation about Intel is that they must have spent tons of money on Optane R&D, and their initial marketing programs could have done a much better job of motivating more rapid user experimentation and brand loyalty.

For example, coming out of the gate with low-capacity x2 M.2 versions intended for "caching" was a marketing mistake, imho.

In other words, without having financial statements in hand to read for myself, I do suspect that Optane revenues have fallen short of Intel's predictions for ROI (return on investment).

That said (and I could be wrong about this), I am now expecting that Intel will do whatever it can to lock and limit Optane DIMMs to a limited number of Intel chipsets.

Assuming I am even close to being correct on that point, the question above is one of great importance to the future of high-performance computing:

"Is there any flash based competition to XPoint in a DRAM slot?"

Answer: No, unless tier 1 vendors like Samsung are actively pursuing such competition but doing so in total secrecy.

Let me give you a real-life example: I can foresee a niche storage product that will benefit greatly from non-volatile DRAM in the SO-DIMM form factor.

I honestly do not expect that Intel is planning to manufacture Optane SO-DIMMs any time soon (e.g. during the next 3 to 5 years). As such, in order for this niche product to become a reality, it will need to use an entirely different non-volatile DRAM technology, e.g. Spin-Torque MRAM.

But where else do we look for alternatives to Intel's current Optane products? I really do NOT have any good answers to that question. Yes, there is a ton of documentation on the potential of non-volatile DRAM, but all of the documented alternatives to Optane appear to be "in development" and/or in need of more research before they are ready for volume manufacturing and adequate storage capacity. And that situation has remained mostly unchanged for several years.

p.s. Many thanks, once again, for another excellent review, Allyn.

December 16, 2018 | 05:23 PM - Posted by Paul A. Mitchell (not verified)

Good article here, dated August 16, 2018:

https://semiengineering.com/next-gen-memory-ramping-up/

e.g. concerning Intel's joint venture with Micron:

"Clearly, Intel has the resources to go it alone in 3D XPoint. But the question is whether Intel will ever recoup its massive R&D investments with the technology."

This observation goes a long way towards explaining the MSRPs of current Optane products.

I would argue that Samsung also has the resources to develop serious competition to all current and future Optane products, e.g. by teaming with AMD?

December 16, 2018 | 05:34 PM - Posted by Paul A. Mitchell (not verified)

Another revealing quote from that same article, re: a DRAM-compatible high-capacity device that will compete with DRAM:

https://semiengineering.com/next-gen-memory-ramping-up/

[BEGIN QUOTE]

Meanwhile, in R&D, Nantero is developing carbon nanotube RAMs. For embedded apps, Fujitsu is expected to offer the first carbon nanotube RAMs based on Nantero’s technology.

“The strategy is to do embedded memory for logic. Fujitsu will be ramping that in 2019,” Nantero’s Doller said. “In the meantime, what we are working on is a DRAM-compatible high-capacity device. That will compete with DRAM.”

So the next-generation memories are making progress, giving OEMs plenty of options. But they still have a long way to go before they are mainstream devices. They may never reach that point, as DRAM and flash continue to roll along.

[END QUOTE]

December 16, 2018 | 07:53 PM - Posted by NewMarketsCostsMuchSoEaarilyInTheGame (not verified)

Don't count Micron out just because they decided not to go to market with the first generation of XPoint IP. Micron is going to market with 2nd generation XPoint, but Intel does have a lead to market at this point. Micron would not be buying out Intel's share of the current Micron/Intel XPoint fab if Micron did not plan to enter the marketplace for XPoint.

Intel and Micron will be going their own individual ways after 2nd generation XPoint is ready for market, so Micron and others can enter into agreements if that's Micron's business plan and it differs from what Intel has planned.

Samsung will be forced into developing some competition to XPoint, as Samsung's Z-NAND will only go so far in keeping up with 2nd generation XPoint. And XPoint competition has forced Samsung to work more SLC NAND into its designs in order to keep up with XPoint's better random R/W access times.

Here is an interesting article from The Memory Guy(1): Intel has been earning little to no money on first generation XPoint sales. So maybe Micron is wise to let that market wait for 2nd generation products and let Intel take more risk.

(1) "Intel’s Losses Amid Others’ Gains," by Jim Handy, published December 11, 2018:
https://thememoryguy.com/intels-losses-amid-others-gains/

December 17, 2018 | 03:20 PM - Posted by Allyn Malventano

Z-NAND disadvantages are primarily based on its maximum random write capability, which translates to lower mixed workload capability as well. Compared against 3D XPoint, Z-NAND has a much harder time under heavier / mixed workloads. So long as the workload is relatively lightweight (minimal queue depth, only read *or* write - not both at the same time), it remains closer to 3D XPoint speeds than NAND speeds. The reason for random writes looking as good as they do is partially due to those writes being buffered as they come into the controller, meaning the data is not yet written to flash until a short time later. This is evidenced by the relatively low maximum random write IOPS rating and the immediate climbing of latencies at QD>1. All of that said, Samsung's QoS at those lower loads looks more consistent than the compared P4800X, but that consistency appears to come at a cost, as the Intel part is overall much faster, offering sometimes multiples of raw IOPS performance and latencies roughly 1/2 to 1/3rd of the Z-SSD.
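The queue-depth behavior described above follows from Little's Law: once a drive is at its IOPS ceiling, average latency is simply the number of outstanding I/Os divided by IOPS, so piling on commands only stretches completion times. A quick illustration, where the IOPS ceiling is a made-up round number rather than a measured Z-SSD or P4800X result:

```python
# Little's Law for queued storage I/O: average latency = outstanding I/Os / IOPS.
# The saturated IOPS figure below is hypothetical, purely to show the shape of the curve.

def avg_latency_us(queue_depth: int, iops: float) -> float:
    """Average completion latency (microseconds) at a given queue depth and sustained IOPS."""
    return queue_depth / iops * 1e6

SATURATED_IOPS = 75_000  # hypothetical random-write ceiling

for qd in (1, 2, 4, 8, 16):
    print(f"QD{qd:<2} -> {avg_latency_us(qd, SATURATED_IOPS):7.1f} us average latency")
```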

Regarding advantage of XPoint in DIMM form factor, that depends on the software. Intel will likely have a driver layer (Intel Memory Drive Technology) that would enable its use as an extended pool of RAM, and that may help speed up things that were previously overrunning RAM and swapping to disk. The real benefit will come once software is coded to take direct advantage of the media. Software has been the limiting factor for Optane so far really. Standard Windows kernel DMA response times cause a +50% latency penalty to Optane NVMe parts as it is already. Storage needs to be treated differently at a fundamental level to take better advantage of 3D XPoint media.
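As a generic illustration of what "treating storage differently" can look like, the sketch below memory-maps a file and updates it with direct loads and stores instead of routing every access through read()/write() system calls. This is only the general direct-access idea under ordinary mmap semantics; it is not Intel Memory Drive Technology or a persistent-memory API, and the file name is hypothetical.

```python
# Generic sketch of byte-addressable, mapped access to storage (ordinary mmap, not a PM API).
# Real persistent-memory programming adds DAX mappings and explicit flush/ordering primitives.
import mmap

path = "scratch.bin"            # hypothetical scratch file
with open(path, "wb") as f:
    f.truncate(4096)            # reserve one page of backing storage

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"       # store directly into the mapping, no write() call
        m.flush()               # push the dirty page back to the file (msync)
        print(bytes(m[0:5]))    # read it back straight from the mapping
```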

December 19, 2018 | 04:39 PM - Posted by Paul A. Mitchell (not verified)

> Storage needs to be treated differently at a fundamental level to take better advantage of 3D XPoint media.

Good point, Allyn! ASUS DIMM.2 slots come to mind: at least one such ASUS motherboard supports two of those DIMM.2 slots, each supporting 2 x NVMe SSDs (4 total).

Because of their spatial proximity to the CPU socket, these DIMM.2 slots really deserve low-overhead drivers that minimize access times to that CPU socket.

> Software has been the limiting factor for Optane so far really.

This implies that there are opportunities for a lot more software optimization too.

December 19, 2018 | 06:10 PM - Posted by Allyn Malventano

The DIMM.2 slots are using standard PCIe. It has the same overhead regardless of distance from the CPU (within reason). Drivers won't really help there - not any more than they already do at least. The answer to that latency problem is using the real DIMM slots with the DDR protocol.

December 15, 2018 | 07:35 PM - Posted by Anonymoushghghgh (not verified)

Allyn, you have the best storage reviews of any site.

December 16, 2018 | 02:38 PM - Posted by Anoyingmouse (not verified)

Allyn, do you know why all Samsung's microSDs are having massive price cuts lately? Thanks.

December 17, 2018 | 03:28 PM - Posted by Allyn Malventano

SD media costs have been going down across the board. Might just be some holiday pricing competition.

December 17, 2018 | 07:06 PM - Posted by Wes Baggerly (not verified)

Allyn, did Samsung mention if there would be a SAS, serial attached SCSI, SSD in their Enterprise lineup?

December 19, 2018 | 06:15 PM - Posted by Allyn Malventano

They have had a few SAS models in the past, but none of the new channel products are SAS. I think it's just easier to go SATA for the particular use case that would benefit from it.

December 20, 2018 | 07:29 AM - Posted by Jerrymyr (not verified)

That's a point!
