V-NAND Showdown - Samsung 850 EVO V1 and V2 Compared - 32 (V1) vs. 48 (V2) Layer Flash

Subject: Storage
Manufacturer: Samsung

Latency Percentile and Conclusion

Latency Percentile

Now for the fun part. Latency Percentile testing was introduced in our 950 Pro review, and it has come in very handy for identifying how performance differences impact the ‘feel’ of a system. With identical pre-conditioning applied to all three tested SSDs, I ran them through the same custom test sequence to extract Latency Percentile data at varying queue depths.
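For those curious about the mechanics, here is a minimal sketch in Python of how a Latency Percentile curve can be built from raw per-IO completion times. It is illustrative only and not the actual test tooling behind these charts; the latency_percentile_curve helper and the gamma-distributed sample data are placeholders.

    # A toy percentile builder (not the actual test tooling): given raw per-IO
    # completion latencies in microseconds, report the latency at or below which
    # a given percentage of IOs completed, i.e. the data behind a Latency Percentile curve.
    import numpy as np

    def latency_percentile_curve(latencies_us, percentiles):
        """Return (percentile, latency_us) pairs for a set of per-IO latencies."""
        samples = np.asarray(latencies_us, dtype=float)
        return list(zip(percentiles, np.percentile(samples, percentiles)))

    # Hypothetical sample data: 100,000 simulated QD=1 4KB write latencies.
    rng = np.random.default_rng(0)
    fake_latencies = rng.gamma(shape=4.0, scale=10.0, size=100_000)  # placeholder only

    for pct, lat in latency_percentile_curve(fake_latencies, [50, 90, 99, 99.9]):
        print(f"{pct:>5}th percentile: {lat:6.1f} us")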

Writes:

First, let's look at the default zoom:

[Chart: 4KB random write Latency Percentiles, all queue depths, default zoom]

Apparently we're going to need to magnify things a bit before we begin to explain:

[Chart: 4KB random write Latency Percentiles, all queue depths, zoomed in]

I had to spread the read charts out below due to excessive overlap between data sets, but I was able to get all of the writes on a single chart since the plotted data swept neatly from left to right.

  • QD=1
    • The 850 EVO V2 (cyan) comes in first here.
    • The 850 EVO V1 (orange) comes in on average ~2us more latent than the V2, but matches the 850 PRO (gray) result almost identically.
  • QD=2
    • The 850 EVO V2 (yellow) again leads here, this time by ~3us, which actually pushes its result close to the 850 PRO/850 EVO V1 results @ QD=1.
    • The 850 EVO V1 (blue) is actually slightly bested by the 850 PRO (green) at QD=2.
  • QD=4
    • All three results are extremely close here, but we can see the 850 EVO V1 and 850 PRO both taper off sooner while the 850 EVO V2 holds its latency near vertical all the way to the 99th percentile (where the V2 leads by 2.6us).
  • QD=8
    • At a load this high, the 850 PRO finally takes the lead (by ~1us on average), with the 850 EVO V1 and V2 running very close together. That said, the IOPS (far right) of the EVO V2 remain very close to the PRO's.

The takeaway here is that at low loads (typical for consumer use), the new 850 EVO V2 not only responds faster to random write IO than the 850 EVO V1, it also beats the 850 PRO by a healthy margin.

Reads:

For reads, the same 4K random workload was applied, but since the read latencies partially overlap at the queue depths tested, I’ve separated each QD into its own chart. Before getting into the data, I’ll first explain that latencies shift to the right (longer) for reads as compared to writes. This is because an SSD can receive and acknowledge data from a host faster than it can respond to a read request. Responding to a read means going all the way to the flash (via the Flash Translation Layer), fetching the requested data, and transferring that data to the host. Writes are quicker from the host’s perspective because the SSD simply receives the data (technically completing the IO) and figures out where to put it after the fact.
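To make that distinction concrete, here is a toy, Linux-only sketch in Python of a QD=1 4KB random read timing loop. This is not the methodology behind these charts; the /dev/sdX target is a placeholder, and O_DIRECT is used so the page cache does not mask the drive itself.

    # A toy QD=1 read-latency loop (Linux-only sketch; not the methodology behind
    # these charts). O_DIRECT bypasses the page cache so the device itself is timed.
    import mmap, os, random, time

    DEV = "/dev/sdX"   # hypothetical target; reads only, but point it at a scratch device/file
    BLOCK = 4096       # 4KiB transfers, matching the workload above
    IOS = 10_000

    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    size = os.lseek(fd, 0, os.SEEK_END)
    buf = mmap.mmap(-1, BLOCK)         # page-aligned buffer, required by O_DIRECT

    latencies_us = []
    for _ in range(IOS):
        offset = random.randrange(size // BLOCK) * BLOCK   # 4KiB-aligned random offset
        start = time.perf_counter_ns()
        os.preadv(fd, [buf], offset)   # synchronous read, so effective queue depth is 1
        latencies_us.append((time.perf_counter_ns() - start) / 1000)
    os.close(fd)

    latencies_us.sort()
    print("median: %.1f us, 99th percentile: %.1f us"
          % (latencies_us[IOS // 2], latencies_us[int(IOS * 0.99)]))

With that aside out of the way, now on to the results: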

QD=1

[Chart: 4KB random read Latency Percentiles, QD=1]

  • We see the familiar 'stepped' read latency profile of Samsung's controller / flash combination (seen at the bottom half of the page here). The profiles may be the same between both EVOs, but the new V2 is shifted neatly 8.7us to the left, which also pushes it faster than the 850 PRO for 67% of all IOs in this test run.

QD=2

[Chart: 4KB random read Latency Percentiles, QD=2]

  • The 850 EVO V2 and 850 PRO are still duking it out for first place at QD=2.

QD=4

[Chart: 4KB random read Latency Percentiles, QD=4]

  • As demand rises, we see the once vertical latency profiles start to slope and taper for all models, but the 850 EVO V2 continues to do well and keeps pace with the 850 PRO.

QD=8

[Chart: 4KB random read Latency Percentiles, QD=8]

  • At a load this high, and just as we saw with writes, the 850 EVO V2 is finally outpaced by the 850 PRO, while the compounding latencies of the slower 850 EVO V1 have now pushed it out to 19us behind the pack (the gaps only appear similar because we are on a logarithmic scale; see the plotting note below).
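As an aside on reading these charts, here is an illustration in Python (with placeholder data, not the results above) of plotting percentile curves against a logarithmic latency axis. On a log scale, similar-looking horizontal gaps toward the right of the chart can correspond to very different absolute microsecond deltas.

    # Illustrative only: percentile-vs-latency curves on a logarithmic latency axis.
    # Drive names and the gamma-distributed samples are placeholders, not measured data.
    import numpy as np
    import matplotlib.pyplot as plt

    percentiles = np.arange(1, 100)
    rng = np.random.default_rng(1)
    drives = {
        "Drive A": rng.gamma(5.0, 16.0, 50_000),
        "Drive B": rng.gamma(5.0, 18.0, 50_000),
        "Drive C": rng.gamma(5.0, 22.0, 50_000),
    }

    for name, samples in drives.items():
        plt.plot(np.percentile(samples, percentiles), percentiles, label=name)

    plt.xscale("log")                  # logarithmic latency axis, as in the charts above
    plt.xlabel("IO latency (us)")
    plt.ylabel("percent of IOs completed")
    plt.legend()
    plt.savefig("latency_percentile.png")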

Conclusion

We are happy to confirm that there is nothing to worry about with Samsung’s mid-line swap of V-NAND in their 850 EVO line of SSDs. We are even happier to report that the new 48-layer TLC parts enable the 850 EVO to respond even faster to low queue depth IO requests than its 32-layer TLC equipped predecessor, and even faster than the 32-layer MLC equipped 850 PRO! Apart from the noted latency improvements, differences in other metrics, such as sequential data transfers and higher queue depth workloads, proved negligible in our testing of the 1TB capacity point. If and when Samsung updates their 850 PRO to a V2 / 48-layer V-NAND combination, we will be standing by to repeat this testing accordingly.

March 18, 2016 | 09:21 AM - Posted by Anonymous (not verified)

Excellent review as always.

Please do the same comparison with the 250 and 500 GB; I think these are the most popular capacities.

March 18, 2016 | 09:38 AM - Posted by marian4q

Great review

March 18, 2016 | 10:17 AM - Posted by ryanbush81

Very good. I feel way better about my purchases knowing that PCPER is there to hold these manufacturers accountable for their products.

Keep up the great work guys!

March 18, 2016 | 12:41 PM - Posted by Xebec

Thanks Allyn! Always appreciate these details!

March 18, 2016 | 01:37 PM - Posted by sensacion7

As always, a great review from Allyn, thanks

#AllynTeaches

March 18, 2016 | 02:27 PM - Posted by CB (not verified)

Great review. Love that you went the extra mile to get this in depth technical knowledge.

Is the 48-layer V-nand going to be used in all capacities?

How can one tell which one they have or purchase?

March 18, 2016 | 06:41 PM - Posted by Allyn Malventano

Not sure on packaging differences for the V2 (more to follow there).

48-layer will replace 32-layer in all capacities of the EVO (so far), but they are dropping the 120GB, most likely since it could not maintain expected performance with only four dies. 120GB is now covered by the 750 EVO.

March 20, 2016 | 03:03 AM - Posted by Krutou (not verified)

Any idea when the V2 version will roll out to retailers?

March 18, 2016 | 02:51 PM - Posted by Kevin (not verified)

Yes I would like to know also how you can tell which one you have. I'm looking at an 850 evo I bought a few weeks ago and have not installed yet.

March 18, 2016 | 04:36 PM - Posted by D1RTYD1Z619

Allyntek has the best ssd reviews

March 18, 2016 | 07:26 PM - Posted by MrTbagz (not verified)

on the next podcast please discuss the news story about Google's SSD Study. They found high-end SLC drives are no more reliable than MLC drives. They also found SSDs fail at a lower rate than disks, but Uncorrectable Bit Error Rate is higher. thx, do a search for 'google ssd study' for more info

March 19, 2016 | 02:28 PM - Posted by Allyn Malventano

I've read the study, and they are correct in how these failures happen, but their gear is not comparable to the consumer products that we test. As an example, Intel Enterprise SSDs (S3700) use different firmware that will intentionally brick them when certain errors are detected, while that same controller with consumer model firmware (SSD 730) will do its best to error correct and push on. The enterprise side does this because it is easier for them to just swap a drive, rebuild, and get on with their day. Enterprise settings are more tolerant of errors in favor of higher / more consistent performance.

March 20, 2016 | 07:24 PM - Posted by Glaring_Mistake (not verified)

But weren't they also prone to silent errors, meaning that they were never reported?

March 18, 2016 | 09:22 PM - Posted by Gamesing (not verified)

Is there a possibility that you could do a three-drive RAID 0 test for speed and heat, the way you did for your Triple M.2 Samsung 950 Pro Z170 PCIe NVMe RAID test?

Also, do you know if Intel is going to change their controller for M.2 and SATA so that you do not lose any SATA ports when using M.2?

March 19, 2016 | 02:31 PM - Posted by Allyn Malventano

Currently Z170 trades M.2 for a pair of SATA. Some motherboards can arrange the lanes such that one M.2 can be used with no impact on SATA lanes (but the next M.2 will take four).

For a SATA RAID test, we don't have three of any sufficiently speedy identical SATA models on hand, but I can say that the scaling results tend to be similar. SATA just runs at a slower bus speed, but the IOPS scale similarly.

March 18, 2016 | 11:03 PM - Posted by Anonymous (not verified)

Great stuff, thanks!

March 18, 2016 | 11:23 PM - Posted by MRFS (not verified)

U B D'BEST, Allyn. MANY THANKS!

MRFS

March 19, 2016 | 03:01 AM - Posted by khanmein (not verified)

@Allyn Malventano I'm confused. Which model version is MLC? Thanks

EVO >> TLC

PRO >> MLC

March 19, 2016 | 02:32 PM - Posted by Allyn Malventano

EVO V1 >> TLC

EVO V2 >> TLC

PRO >> MLC

March 20, 2016 | 11:56 PM - Posted by khanmein (not verified)

thanks boss.

March 19, 2016 | 11:30 AM - Posted by Anonymous (not verified)

As long as they keep their prices competitive, looks like I'll keep buying. Although in my experience I swapped a 120GB 840 EVO for a 120GB 850 EVO as a boot drive, and found the 850 slower. And that's just by getting the Microsoft flag on screen. Migration done by Samsung software. Weird eh?

March 19, 2016 | 02:33 PM - Posted by Allyn Malventano

Yeah something weird is going on there, because 850's walk all over 840's generally.

March 20, 2016 | 03:06 AM - Posted by Anonymous (not verified)

Until the 500GB and 250GB versions are tested and compared, I don't see the guarantee that the 1TB version would be indicative of no performance loss.

Samsung's controller is likely optimized for 4 dies per channel with 8 channels. This is why, when you look at V1 500GB and 1TB comparisons, the performance is nearly the same (including official specs), as the 500GB already has the 32 dies necessary for saturation, whereas there is a drop-off in write performance moving to 250GB due to the drop to 16 dies.

So we would need to see what happens with the 500GB and 250GB versions due to the now lower number of dies. Do they actually match the performance of their V1 counterparts? I'd say it's interesting that they chose to sample the 1TB V2 with its 32 dies.

March 21, 2016 | 12:54 PM - Posted by Allyn Malventano

It's not about the number of dies per controller as much as it is about the TLC write speed of those individual dies. You are correct that the TLC write speed of a 250GB V1 vs. V2 may be different, but in practical use, these consumer drives almost never operate at TLC speeds (How often does any typical consumer write >3GB of data at >270 MB/s?).

March 22, 2016 | 03:41 PM - Posted by Mr.Meth

Great review as always guys !!

March 26, 2016 | 04:56 PM - Posted by nobody (not verified)

Just for reference, the SSDs V1 and V2 have different P/N:

V1: MZ7LE1T0
V2: MZ7LN1T0

Maybe you people can check if the SSDs that you own have the same letters.

See yaa..

March 28, 2016 | 11:26 AM - Posted by Magistar (not verified)

This short review does not seem to account for the TurboWrite technology. Basically, TurboWrite diminishes the meaning of basic benchmarks like ATTO because you will be testing the fast SLC portion only. In such tests even the 250 GB model performs identically to the larger models.

The real question here is: what happens to performance consistency with this new V-NAND?

After all, benchmarks have shown gaps of over 500% between the 250 GB and 1 TB models in scenarios where the SLC buffer is already full. In stuff like ATTO there was no difference to begin with.

April 4, 2016 | 08:47 AM - Posted by Anonymous (not verified)

Hi there,

Can anyone tell me where to purchase these new 48-layer V-NAND drives? I've looked on Amazon and it seems they are still selling the 32-layer versions. Anyone got a link (specifically for the 1TB model)??

Thanks in advance.

May 2, 2016 | 09:00 PM - Posted by Dzezik (not verified)

the 48-layer is not better than the 32-layer.
it uses less power; this is the only advantage.
performance is similar,
but the consistency of performance is worse,
and the endurance is also worse.
going to more layers is only because of efficiency.

I can tell you I only buy good old 34nm SLC,
and it outperforms any MLC or TLC, including 3D V-NAND.
With SLC you can buy a 100GB SSD starting from $100 and not worry about the performance at all, and forget about consistency and endurance concerns.
MLC, TLC, and V-NAND were created only because of efficiency and low cost. SLC is still the best in the performance class.

May 2, 2016 | 08:51 PM - Posted by Dzezik (not verified)

the first version, 24-layer, was used in two models:
SV843 and 845DC PRO.
they differ only in overprovisioning and price:
SV843 was only 960GB with 7% OP,
845DC PRO was max 800GB with 28% OP.
moving from 7% to 28% OP roughly doubles the steady-state random write performance and doubles the endurance.
but
the biggest improvement comes not from moving from 24 to 32 layers, but
from moving from the 300MHz MDX to the 500MHz MHX controller.
the 48-layer uses an even newer controller.

February 3, 2017 | 09:42 AM - Posted by Jose (not verified)

Does anyone know if the performance degradation issue that affected the 840 evo is present in the 850 evo?

September 1, 2017 | 03:15 PM - Posted by Shinjin (not verified)

Does anyone know when the 64-layer TLC will be available for the 850 EVO?
