FMS 2016: Samsung To Announce 64-Layer 4th Gen V-NAND, 32TB 2.5" SSD

Subject: Storage | August 10, 2016 - 02:00 PM |
Tagged: 2.5, V-NAND, ssd, Samsung, nand, FMS 2016, FMS, flash, 64-Layer, 32TB, SAS, datacenter

At a huge press event like Flash Memory Summit, being in the right place at the right time (and with the right camera) matters greatly. I'll just let a picture say a thousand words for me here:

This picture has been corrected for extreme parallax and was taken in far-from-ideal conditions, but you get the point. Samsung's keynote is coming up later today, and I have a hunch this will be a big part of what they present. We did know 64-Layer was coming, as it was mentioned in Samsung's last earnings announcement, but confirmation is nice.

*edit* now that the press conference has taken place, here are a few relevant slides:

[Samsung keynote slides]

With 48-Layer V-NAND announced last year (and still rolling out), it's good to see Samsung pushing hard toward higher-capacity dies. 64-Layer enables 512Gbit (64GB) per die, and a maximum throughput of 100MB/s per die means even lower-capacity SSDs should offer impressive sequential speeds.
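As a quick sanity check on those figures, here is a back-of-the-envelope sketch. The per-die numbers (512Gbit, 100MB/s) come from the slides; the drive capacities chosen below are just illustrative examples, not announced products:

```python
# Back-of-the-envelope math for 64-Layer V-NAND, using the per-die
# figures from Samsung's slides (512 Gbit capacity, 100 MB/s throughput).

GBIT_PER_DIE = 512
GB_PER_DIE = GBIT_PER_DIE // 8          # 512 Gbit = 64 GB per die
MBPS_PER_DIE = 100                      # max sequential throughput per die

def drive_estimate(capacity_gb):
    """Dies needed for a given capacity, and the aggregate sequential
    throughput those dies could sustain (controller/interface permitting)."""
    dies = capacity_gb // GB_PER_DIE
    return dies, dies * MBPS_PER_DIE

# Even a modest 256 GB drive gets 4 dies -> ~400 MB/s of raw flash
# throughput, which is why low-capacity models should still post
# respectable sequential numbers.
print(drive_estimate(256))   # (4, 400)
print(drive_estimate(1024))  # (16, 1600)
```

The flip side, raised in the comments below, is that fewer dies per drive also means less parallelism at the low end than with smaller dies.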

[Image: Samsung 48-Layer V-NAND. Pic courtesy of TechInsights.]

64-Layer is Samsung's 4th generation of V-NAND. We've seen 48-Layer and 32-Layer, but few know that 24-Layer existed as well (it appeared mainly in limited enterprise parts).

We will know more shortly, but for now, dream of even higher capacity SSDs :)

*edit* and this just happened:

[photo from the announcement]

*additional edit* - here's a better picture taken after the keynote:

[post-keynote photos of the 32TB SSD]

The 32TB model in their 2.5" form factor displaces last year's 16TB model. The drive itself is essentially identical, but the flash packages now contain 64-layer dies, doubling the available capacity of the device.
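The doubling works out neatly in die counts. Assuming the prior 48-Layer generation's 256Gbit die (implied by the capacity doubling above), both drives use the same number of dies; the package layout in the sketch below is illustrative, not confirmed by Samsung:

```python
# How the 2.5" drive doubles from 16 TB to 32 TB without changing the
# physical design: same number of dies, each die doubles in capacity.
# (Assumes a 256 Gbit 48-Layer die; the 16-die stack layout below is
# a hypothetical illustration, not from Samsung.)

TB = 1024  # GB per TB, binary convention for simplicity

die_gb_48layer = 256 // 8   # 256 Gbit 48-Layer die -> 32 GB
die_gb_64layer = 512 // 8   # 512 Gbit 64-Layer die -> 64 GB

dies_16tb = (16 * TB) // die_gb_48layer
dies_32tb = (32 * TB) // die_gb_64layer

print(dies_16tb, dies_32tb)  # 512 512 -- identical die count

# Hypothetical packaging: 16-die stacks would mean 32 flash packages
# either way, so the PCB could stay essentially the same.
print(dies_32tb // 16)  # 32
```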


August 10, 2016 | 02:25 PM - Posted by Mobile_Dom

so will this mean even more lower capacity models get phased out, to avoid the performance loss of hitting high capacity with fewer dies? or will they just use lower capacity dies

August 11, 2016 | 11:37 PM - Posted by Anonymous (not verified)

The lower capacity models would be phased out eventually anyway. Hopefully this will move larger capacity drives down in price. They may want to put multiple interfaces on each die if the write performance is insufficient. I thought there was already some design that implemented something similar to that, but I don't remember what it was. It would be easy to make a single die look like multiple die to the controller. They also could solve this with TSV interconnect. The stack could easily present multiple interfaces to the controller.

August 10, 2016 | 03:42 PM - Posted by JosiahBradley

Didn't seagate just announce a 60TB SSD? Isn't 60 > 32?

August 10, 2016 | 04:06 PM - Posted by Anonymously Anonymous (not verified)

yup, clearly Samsung didn't get the memo.

August 10, 2016 | 08:38 PM - Posted by Allyn Malventano

Samsung's is in 2.5" form factor. Seagate is 3.5".

August 10, 2016 | 09:26 PM - Posted by AlbertaMalventano (not verified)

Each marketing sign clearly makes the claim that it is the highest capacity SSD in the world in a lovely large bold font. Samsung looks like a fool for putting up that sign.

August 10, 2016 | 05:07 PM - Posted by robertk (not verified)

they did. read their paper precisely :)
Samsung's - world's largest 32TB SSD ...
guessing no other Samsungs have a bigger one than 32TB ...

August 12, 2016 | 11:01 AM - Posted by Anonymous (not verified)

it doesn't matter much because it is the best in the business.

August 10, 2016 | 06:24 PM - Posted by Anonymous (not verified)

That's a tall order at the IHOP, but I hear that going beyond 64 layers is going to be accomplished by stacking 2 or more milt-layer chips because of the alignment issues with the layers making things more difficult as the number of layers gets higher.

August 10, 2016 | 06:29 PM - Posted by Anonymous (not verified)

Edit: milt-layer

To: multi-layer

LibreOffice still does not have the "Multi" prefix added to their english dictionary.

August 10, 2016 | 08:40 PM - Posted by Allyn Malventano

Stacking dies is already a thing - it's how the flash dies are packaged.

August 11, 2016 | 10:03 AM - Posted by Anonymous (not verified)

Are most of them using TSV now or are they still offset with edge connected wires?

August 11, 2016 | 12:30 PM - Posted by Allyn Malventano

Toshiba is TSV, but I believe Samsung is still edge.

August 11, 2016 | 05:15 PM - Posted by Anonymous (not verified)

There is probably an optimum number of layers. Going up too many layers will increase die cost due to added process steps and more defects. Similar situation with x/y die size; larger die increases the number of defective die. We have had stacking of flash die for a long time, but they were using offset die with edge connections. They are starting to use TSV, but the bandwidth per die isn't really high enough to require it yet.

August 10, 2016 | 10:47 PM - Posted by Pixy Misa (not verified)

The high density combined with the improved speed and write endurance is really great. They just need to work on the price!

I wonder how the cost of building new fabs compares to the cost of all the extra processing steps for multi-layer flash. If they can keep existing fabs running for many more years, is that enough to keep driving down the price?

August 11, 2016 | 10:05 AM - Posted by Anonymous (not verified)

They went back to an older, larger process with the initial V-NAND. Not sure what they are using now.

August 11, 2016 | 11:07 AM - Posted by Anonymous (not verified)

Yes, larger in the X and Y but more layers in the Z is the way to go, with plenty of atoms to retain all those states over longer periods of time. They should try to get beyond 64 layers or stack more multi-layer dies using TSVs to save as much space as possible. Now get some hybrid 2TB drives with at least 32/64 GB of XPoint cache and the right amount of DRAM to buffer the XPoint. I'd really like the controller to be able to manage the XPoint and NAND like a tiered storage system and keep the most active data on the XPoint, like the paging swap and essential OS/application files, with the rest of the files on the 2 TB of NAND. Let the much more durable and faster-than-NAND XPoint take the wear and tear by keeping the active blocks on the XPoint as much as possible.

If they could set up the XPoint Cache to be multi-way set associative with whatever associated blocks of NAND to try and keep as much of the active blocks of storage staged on the XPoint then it would be easy to offer 5+ years warranty on any similar SSD.

August 11, 2016 | 03:07 PM - Posted by Anonymous (not verified)

There is going to be an optimum z height. More layers means more expensive and more chances to create defects. Each layer adds extra processing steps, making the die more expensive. Stacking multiple die with TSV is not free either. You still need area for the TSV through channels also. I don't know how economical going above 64 layers per die will be.

For using it as a cache, this is far enough out in the memory hierarchy that associativity is not really a concern. You would generally use large size blocks and allow fully associative placement. This is how virtual memory systems work. Block size is generally a 4K page, I believe, and a page can generally be placed anywhere in memory that isn't reserved.

August 11, 2016 | 09:02 AM - Posted by John H (not verified)

Are these 'uber drives' (32/60TB) likely to be very high power consumption due to the total # of transistors involved?

Is there a point where the operating power of a SSD is substantially higher than a HDD? (do NAND cells need to be 'kept warm' for access time to be fast?)

August 11, 2016 | 12:32 PM - Posted by Allyn Malventano

The simple answer is not really. Power consumption really comes from the raw throughput (it takes x power to program a cell). That 60TB Seagate SSD was <20W because it can only program cells at a rate ~1.5 GB/s, limited by the interface. Sure there is per-die standby power, but it is minimal. That same 60TB SSD standby power was ~4W.

August 12, 2016 | 06:50 AM - Posted by John H (not verified)

Thank you!

August 11, 2016 | 12:27 PM - Posted by Anonymous (not verified)

Most of the power is consumed by the controller and the DRAM cache. The flash die have a relatively small number of active circuits, so power consumption is quite low. This is what allows them to stack the flash with the controller on top. I think even Samsung's PCIe drives (higher transfer rate takes more power) only consume about 5 to 7 watts max.

August 11, 2016 | 03:10 PM - Posted by Anonymous (not verified)

Meant as reply to power consumption question.

