Subject: Storage | February 14, 2016 - 02:51 PM | Allyn Malventano
Tagged: vnand, ssd, Samsung, nand, micron, Intel, imft, 768Gb, 512GB, 3d nand, 384Gb, 32 Layer, 256GB
You may have seen a wave of Micron 3D NAND news posts these past few days, and while many are repeating the 11-month old news with talks of 10TB/3.5TB on a 2.5"/M.2 form factor SSDs, I'm here to dive into the bigger implications of what the upcoming (and future) generation of Intel / Micron flash will mean for SSD performance and pricing.
Remember that, with the way these capacity increases are going, the only way to get a high-performance, high-capacity SSD on the cheap in the future will be to actually buy those higher capacity models. With such a large per-die capacity, smaller SSDs (like 128GB / 256GB) will suffer significantly slower write speeds. Taking this upcoming Micron flash as an example, a 128GB SSD will contain only four flash memory dies, and as I wrote back in 2014, such an SSD would likely see HDD-level sequential write speeds of 160MB/sec. Other SSD manufacturers already recognize this issue and are taking steps to correct it. At Storage Visions 2016, Samsung briefed me on the upcoming SSD 750 Series, which will use planar 16nm NAND to produce 120GB and 250GB capacities. The smaller die capacities of these models will enable respectable write performance, and will also let Samsung discontinue the 120GB 850 EVO as they transition that line to higher capacity 48-layer VNAND. Getting back to this Micron announcement, we have some new info that bears analysis, and it pertains to the now-announced page and block sizes:
256Gb MLC: 16KB Page / 16MB Block / 1024 Pages per Block
384Gb TLC: 16KB Page / 24MB Block / 1536 Pages per Block
To understand what these numbers mean, using the MLC line above, imagine a 16MB CD-RW (Block) that can write 1024 individual 16KB 'sessions' (Pages). Each 16KB Page can be added individually over time, and just as files on a CD-RW could be modified by writing a new copy in the remaining space, flash can do so by writing a new Page and ignoring the out-of-date copy. Where the rub comes in is when that CD-RW (Block) is completely full. The process at this point is actually very similar, in that the Block must be completely emptied before the erase command (which wipes the entire Block) is issued. The data has to go somewhere, which typically means writing to empty Blocks elsewhere on the SSD (and in worst case scenarios, those too may need clearing first), and this moving and erasing takes time for the die to accomplish. Just as wiping a CD-RW took much longer than writing a single file to it, erasing a Block typically takes 3-4x as long as programming a Page.
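For the programmers out there, that append-only Page / erase-whole-Block behavior can be sketched in a few lines of Python (a toy model for illustration only, not any vendor's actual flash translation layer):

```python
# A toy model of NAND Block behavior (illustration only, not a real FTL):
# Pages can only be appended; updating data writes a new Page and leaves
# the old copy stale; space is only reclaimed by erasing the whole Block.

PAGES_PER_BLOCK = 1024  # from the 256Gb MLC spec: 16MB Block / 16KB Page

class Block:
    def __init__(self):
        self.pages = []   # append-only log of written Pages
        self.live = {}    # key -> index of the current (valid) copy

    def write(self, key):
        if len(self.pages) >= PAGES_PER_BLOCK:
            raise RuntimeError("Block full: relocate live Pages, then erase")
        self.pages.append(key)
        self.live[key] = len(self.pages) - 1  # older copies become stale

    def stale_count(self):
        return len(self.pages) - len(self.live)

b = Block()
for _ in range(3):
    b.write("fileA")      # each 'update' burns a fresh 16KB Page
print(b.stale_count())    # 2 stale copies waiting on a Block erase
```

Once every Page in a Block is consumed, the only way to reclaim the stale space is to copy the live Pages elsewhere and erase the entire Block, which is exactly the shuffling described above.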
With that explained, of significance here are the growing Page and Block sizes in this higher capacity flash. Modern OS file systems have a minimum bulk access size of 4KB, and Windows versions since Vista align their partitions by rounding up to the next 1MB increment from the start of the disk. These changes are what enabled HDDs to transition to Advanced Format, which made data storage more efficient by bringing the increment up from the 512 Byte sector to 4KB. While most storage devices still use 512B addressing, it is safe to assume that 4KB is the minimum random access seen most of the time. Wrapping this all together, the Page size (minimum read or write) is 16KB for this new flash, which is 4x the accepted 4KB minimum OS transfer size. This means that power users heavy on their page file, running VMs, or performing any other random-write-heavy operations over time will see amplified wear on this flash. The additional shuffling of data that must take place for each 4KB write also translates to lower host random write speeds when compared to lower capacity flash with smaller Page sizes closer to that 4KB figure.
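As a rough back-of-the-napkin illustration (assuming a naive read-modify-write model; real controllers combine and buffer writes), the mismatch between the 4KB OS transfer size and the 16KB Page works out like this:

```python
# Worst-case write amplification from the Page size mismatch
# (assumed read-modify-write model; real controllers combine writes):
HOST_WRITE_KB = 4     # minimum OS transfer size
NEW_PAGE_KB = 16      # this upcoming IMFT flash
OLD_PAGE_KB = 8       # hypothetical smaller-Page flash for comparison

wa_new = NEW_PAGE_KB / HOST_WRITE_KB
wa_old = OLD_PAGE_KB / HOST_WRITE_KB
print(wa_new, wa_old)  # 4.0 2.0 -> twice the internal shuffling per 4KB write
```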
A rendition of 3D IMFT Floating Gate flash, with inset pulling back some of the tunnel oxide layer to show the location of the floating gate. Pic courtesy Schiltron.
Fortunately for Micron, their choice to carry Floating Gate technology into their 3D flash has netted them some impressive endurance benefits over competing Charge Trap Flash. One such benefit is a claimed 30,000 P/E (Program / Erase) cycle endurance rating. Planar NAND had dropped to the 3,000 range at its smallest shrinks, mainly because the shrinking channel could hold only so few electrons, amplifying the (negative) effects of electron leakage. Even back in the 50nm days, MLC ran at ~10,000 cycle endurance, so 30,000 is no small feat here. The key is that applying the same Floating Gate tech that controlled leakage so well in planar NAND to a new 3D channel that can store far more electrons enables excellent endurance, which may actually exceed that of Samsung's Charge Trap Flash equipped 3D VNAND. This should effectively negate the endurance hit from the larger Page sizes discussed above, but the potential small random write performance hit still stands, with a possible remedy being to crank up the Over-Provisioning of SSDs (AKA throwing flash at the problem). Higher OP means fewer active Pages per Block and a reduction in the data shuffling forced by smaller writes.
A 25nm flash memory die. Note the support logic (CMOS) along the upper left edge.
Another thing helping Micron here is that their Floating Gate design also enables a shift of 75% of the CMOS circuitry to a layer *underneath* the flash storage array. This logic is typically part of what you see 'off to the side' of a flash memory die. Layering CMOS logic in such a way is likely thanks to Intel's partnership and CPU development knowledge. Moving this support circuitry to the bottom layer of the die means less die area dedicated to non-storage, more dies per wafer, and ultimately lower cost per chip/GB.
Samsung's Charge Trap Flash, shown in both planar and 3D VNAND forms.
One final thing before we go. If we know anything about how the Intel / Micron duo functions, it is that once they get that freight train rolling, it leads to relatively rapid advances. In this case, the changeover to 3D has taken them a while to perfect, but once production gains steam, we can expect to see some *big* advances. Since Samsung launched their 3D VNAND, their gains have been mostly iterative in nature (24, 32, and most recently 48 layers). I'm not yet at liberty to say how the second generation of IMFT 3D NAND will achieve it, but I can say that it appears the iteration after this 32-layer 256Gb (MLC) / 384Gb (TLC) per die will *double* to 512Gb / 768Gb (you are free to do the math on what that means for layer count). Remember back in the day when Intel launched new SSDs at a fraction of the cost/GB of the previous generation? That might just happen again within the next year or two.
Subject: General Tech | July 30, 2015 - 02:45 PM | Ken Addison
Tagged: podcast, video, Intel, XPoint, nand, DRAM, windows 10, DirectX 12, freesync, g-sync, amd, nvidia, benq, uhd420, wasabi mango, X99, giveaway
PC Perspective Podcast #360 - 07/30/2015
Join us this week as we discuss Intel XPoint Memory, Windows 10 and DX12, FreeSync displays and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Jeremy Hellstrom, Josh Walrath, and Sebastian Peak
Program length: 1:28:34
Subject: Storage | March 26, 2015 - 02:12 PM | Sebastian Peak
Tagged: storage, ssd, planar, nand, micron, M.2, Intel, imft, floating-gate, 3d nand
Intel and Micron are jointly announcing new 3D NAND technology that will radically increase solid-storage capacity going forward. The companies have indicated that moving to this technology will allow for the type of rapid increases in capacity that are consistent with Moore’s Law.
The way Intel and Micron are approaching 3D NAND is very different from existing 3D technologies from Samsung and now Toshiba. The implementation of floating-gate technology and “unique design choices” has produced startling densities of 256 Gb MLC, and a whopping 384 Gb with TLC. The choice to base this new 3D NAND on floating-gate technology lets development build on a well-known quantity, and benefits from the knowledge base that Intel and Micron have built working with this technology on planar NAND over their long partnership.
What does this mean for consumers? This new 3D NAND enables greater than 10TB capacity on a standard 2.5” SSD, and 3.5TB on M.2 form-factor drives. These capacities are possible with the industry’s highest density 3D NAND, as the >3.5TB M.2 capacity can be achieved with just 5 packages of 16 stacked dies with 384 Gb TLC.
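If you want to check that math yourself (using the package and die figures from the announcement, and decimal units), it works out like this:

```python
# Sanity-checking the >3.5TB M.2 claim (decimal units assumed):
packages = 5
dies_per_package = 16
die_gbit = 384                       # TLC die density

total_gbit = packages * dies_per_package * die_gbit
total_tb = total_gbit / 8 / 1000     # Gbit -> GB -> TB
print(total_gbit, total_tb)          # 30720 3.84 -> the ">3.5TB" figure
```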
A 3D NAND cross section from Allyn's Samsung 850 Pro review
While such high density might suggest reliance on ever-shrinking process technology (and the inherent loss of durability associated with it), Intel is likely using a larger process for this NAND. Though they would not comment on this, Intel could be using something roughly equivalent to 50nm flash with this new 3D NAND. In the past, die shrinks have been used to increase capacity per die (and yields), such as IMFT's move to 20nm back in 2011, but with the ability to achieve greater capacity vertically using 3D cell technology, a smaller process is not necessary to achieve greater density. Additionally, working with a larger process would allow for better endurance; 50nm MLC, for example, was on the order of 10,000 program/erase cycles. Samsung similarly moved to a larger process with their initial 3D NAND, stepping back from their existing 20nm technology to roughly 30nm for 3D production.
This announcement is also interesting considering Toshiba has just entered this space as well, having announced 48-layer 128 Gb density 3D NAND, and like Samsung, they are moving away from floating-gate and using their own charge-trap implementation they call BiCS (Bit Cost Scaling). With this Intel/Micron announcement, however, the emphasis is on the ability to offer a 3x increase in capacity using the venerable floating-gate technology from planar NAND, which gives Intel / Micron an attractive position in the market - depending on price/performance, of course. And while these very large capacity drives seem destined to be expensive at first, the cost structure is likely to be similar to current NAND. All of this remains to be seen, but this is indeed promising news for the future of flash storage, as it will now scale up to (and beyond) spinning media capacity - unless 3D tech is implemented in hard drive production, that is.
So when will Intel and Micron’s new technology enter the consumer market? It could be later this year as Intel and Micron have already begun sampling the new NAND to manufacturers. Manufacturing has started in Singapore, plus ground has also been broken at the IMFT fab in Utah to support production here in the United States.
It has become increasingly apparent that flash memory die shrinks have hit a bit of a brick wall in recent years. The issues faced by the standard 2D Planar NAND process were apparent very early on. This was no real secret - here's a slide seen at the 2009 Flash Memory Summit:
Despite this, most flash manufacturers pushed the envelope as far as they could within the limits of 2D process technology, balancing shrinks with reliability and performance. One of the largest flash manufacturers was Intel, having joined forces with Micron in a joint venture dubbed IMFT (IM Flash Technologies). Intel remained in lock-step with Micron all the way to 20nm, but chose to hold back at the 16nm step, presumably in order to shift full focus toward alternative flash technologies. This was essentially confirmed late last week with Intel's announcement of a shift to 3D NAND production.
Intel's press briefing seemed to focus more on cost efficiency than performance, and after reviewing the very few specs they released about this new flash, I believe we can do some theorizing as to the potential performance of this new flash memory. From the above illustration, you can see that Intel has chosen to go with the same sort of 3D technology used by Samsung - a 32 layer vertical stack of flash cells. This requires the use of an older / larger process technology, as it is too difficult to etch these holes at a 2x nm size. What keeps the die size reasonable is the fact that you get a 32x increase in bit density. Going off of a rough approximation from the above photo, imagine that 50nm die (8 Gbit), but with 32 vertical NAND layers. That would yield a 256 Gbit (32 GB) die within roughly the same footprint.
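That footprint math, in quick Python form (the 8 Gbit / 50nm-class starting point is my rough approximation from the photo, not an official spec):

```python
# 32 layers of cells in roughly the same footprint as a 50nm-class die:
planar_gbit = 8        # rough per-die capacity of that 50nm-era die
layers = 32

die_gbit = planar_gbit * layers
print(die_gbit, die_gbit // 8)   # 256 (Gbit) 32 (GB per die)
```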
Representation of Samsung's 3D VNAND in 128Gbit and 86 Gbit variants.
20nm planar (2D) = yellow square, 16nm planar (2D) = blue square.
Image republished with permission from Schiltron Corporation.
It's likely a safe bet that IMFT flash will be going for a cost/GB far cheaper than the competing Samsung VNAND, and going with a relatively large 256 Gbit (vs. VNAND's 86 Gbit) per-die capacity is a smart move there, but let's not forget that there is a catch - write speed. Most NAND is very fast on reads, but limited on writes. Shifting from 2D to 3D NAND netted Samsung a 2x speed boost per die, and another effective 1.5x speed boost due to their choice to reduce per-die capacity from 128 Gbit to 86 Gbit. This effective speed boost came from the fact that a given VNAND SSD has 50% more dies to reach the same capacity as an SSD using 128 Gbit dies.
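To see where that extra 1.5x comes from, consider a hypothetical 128GB SSD built from each die capacity (writes stripe across dies in parallel, so more dies means more write throughput):

```python
# Die count for a fixed 128GB of flash (hypothetical drive, ignoring OP):
capacity_gbit = 128 * 8             # 128GB expressed in gigabits

dies_128 = capacity_gbit / 128      # with 128 Gbit dies
dies_86 = capacity_gbit / 86        # with 86 Gbit dies
print(dies_128, round(dies_86))     # 8.0 12 -> ~50% more dies in parallel
```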
Now let's examine how Intel's choice of a 256 Gbit die impacts performance:
- Intel SSD 730 240GB = 16x128Gbit 20nm dies
- 270 MB/sec writes and ~17 MB/sec/die
- Crucial MX100 128GB = 8x128Gbit 16nm dies
- 150 MB/sec writes and ~19 MB/sec/die
- Samsung 850 Pro 128GB = 12x86Gbit VNAND dies
- 470 MB/sec writes and ~40 MB/sec/die
If we do some extrapolation based on the assumption that IMFT's move to 3D will net the same ~2x write speed improvement seen by Samsung, combined with their die capacity choice of 256Gbit, we get this:
- Future IMFT 128GB SSD = 4x256Gbit 3D dies
- 40 MB/sec/die x 4 dies = 160MB/sec
Even rounding up to 40 MB/sec/die, we can see that doubling the die capacity effectively negates the performance improvement. While the IMFT flash equipped SSD will very likely be a lower cost product, it will (theoretically) see the same write speed limits seen in today's SSDs equipped with IMFT planar NAND. Now let's go one layer deeper on theoretical products and assume that Intel took the 18-channel NVMe controller from their P3700 Series and adapted it for a consumer PCIe SSD using this new 3D NAND. The larger die capacity limits the minimum capacity you can attain while still fully utilizing the 18-channel controller, so with one die per channel, you end up with this product:
- Theoretical 18 channel IMFT PCIE 3D NAND SSD = 18x256Gbit 3D dies
- 40 MB/sec/die x 18 dies = 720 MB/sec
- 18x32GB (die capacity) = 576GB total capacity
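For those who want to follow along, here is the per-die arithmetic from the lists above in one place (all figures come from the text, and remember that the ~2x write gain from the 3D transition is my extrapolation from Samsung's results, not a published spec):

```python
# Per-die write speed math from the comparison above:
def per_die(write_mb_s, dies):
    return write_mb_s / dies

print(round(per_die(270, 16)))   # 17 -> Intel SSD 730 240GB
print(round(per_die(150, 8)))    # 19 -> Crucial MX100 128GB
print(round(per_die(470, 12)))   # 39 -> Samsung 850 Pro 128GB (~40)

# Hypothetical IMFT 3D products at an assumed 40 MB/sec/die:
print(40 * 4)     # 160 MB/sec -> 128GB SSD with 4x256Gbit dies
print(40 * 18)    # 720 MB/sec -> 18-channel PCIe drive
print(18 * 32)    # 576 GB     -> 18 dies x 32GB each
```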
Overprovisioning decisions aside, the above would be the lowest capacity product that could fully utilize the Intel PCIe controller. While the write performance is on the low side by PCIe SSD standards, the cost of such a product could easily be in the $0.50/GB range, or even less.
In summary, while we don't have any solid performance data, it appears that Intel's new 3D NAND is not likely to lead to a performance breakthrough in SSD speeds, but their choice on a more cost-effective per-die capacity for their new 3D NAND is likely to give them significant margins and the wiggle room to offer SSDs at a far lower cost/GB than we've seen in recent years. This may be the step that was needed to push SSD costs into a range that can truly compete with HDD technology.
Subject: General Tech | August 6, 2013 - 01:56 AM | Tim Verry
Tagged: Samsung, vnand, vertical nand, charge trap flash, 128Gb, nand
Today, Samsung announced that it has begun mass production of a new kind of 3D NAND flash memory that offers higher reliability and write performance versus traditional 2D “planar” technologies. The so-called VNAND (Vertical NAND) is currently being used in 128Gb (Gigabit) flash chips (matching current 2D flash chips), but the technology has the potential to go much further in terms of capacity.
The VNAND combines an updated version of Samsung’s Charge Trap Flash (CTF) technology (originally developed in 2006) with a vertical stacking and interconnect technology that uses special etching techniques to punch holes and electrical connections down from the top layer of the stack to the bottom layer.
Samsung claims that its proprietary interconnect technology is (currently) able to support up to 24 layers of flash memory. The resulting VNAND offers up to twice the write performance and between 2-times and 10-times higher reliability versus traditional 19nm floating gate NAND (the alternative to CTF NAND) developed on planar processes.
With traditional NAND flash, as flash density increases (such as the move from 25nm to 19nm NAND flash), inter-cell interference also increases due to thinner walls and increased leakage. Samsung is hoping to solve that problem with its vertically-stacked NAND by allowing density to increase without dealing with shrinking the individual layers. Further, each layer is separated by a dielectric (electric insulator) that is currently 50nm and constructed of Silicon Nitride (SiN). The company notes that there is a limit to the height at which flash can be stacked before it becomes un-economical, but that is still a ways off compared to where NAND flash is now as far as densities seen in the wild.
Samsung’s new 128Gb VNAND chip is expected to scale to at least 1Tb depending on consumer demand. The technology is aimed at both embedded NAND and SSDs, but the former is likely to make use of 3D vertical NAND first. Standard 2.5" SSDs could also benefit, but modern SSDs are already bottlenecked by the SATA III 6Gbps bus, so faster write speeds would bring limited gains there. Mobile devices, however, could benefit from faster single-chip VNAND packages immediately, with faster write speeds and higher reliability (and potentially density) versus 2D NAND chips.
It is definitely a technology with potential that is worth keeping an eye on.
The full press release can be found over at Engadget.
Subject: Storage | July 31, 2013 - 05:34 AM | Tim Verry
Tagged: diablo technologies, DIMM, nand, flash memory, memory channel storage
Ottawa-based Diablo Technologies unveiled a new flash storage technology yesterday that it calls Memory Channel Storage. As the name suggests, the technology puts NAND flash into a DIMM form factor and interfaces the persistent storage directly with the processor via the integrated memory controller.
The Memory Channel Storage (MCS) is a drop-in replacement for DDR3 RDIMMs (Registered DIMMs) in servers and storage arrays. Unlike DRAM, MCS is persistent storage backed by NAND flash and it can allow servers to have Terabytes of storage connected to the CPU via the memory interface instead of mere Gigabytes of DRAM acting as either system memory or block storage. The photo provided in the technology report (PDF) shows a 400GB MCS module that can slot into a standard DIMM slot, for example.
Diablo Technologies claims that MCS exhibits lower latencies and higher bandwidth than PCI-E and SATA SSDs. More importantly, the storage latency is predictable and consistent, making it useful for applications such as high frequency stock trading where speed and deterministic latency are paramount. Further, users can get linear increases in throughput with each additional Memory Channel Storage module added to the system. Latencies with MCS are as much as 85% lower than PCI-E SSDs and 96% lower than SAS/SATA SSDs, according to Diablo Technologies. NAND flash maintenance such as wear leveling is handled via an on-board logic chip.
Diablo Technologies is also aiming the MCS technology at servers running big data analytics, massive cloud databases, financial applications, server virtualization, and Virtual Desktop Infrastructure (VDI). MCS can act as super fast local storage or as a local cache for external (such as networked) storage in the form of mechanical hard drives or SSDs.
Some details about Memory Channel Storage are still unclear, but it looks promising. It will not be quite as fast in random access as DRAM, but it will be better (more bandwidth, lower latency) than both PCI-E and SATA-connected NAND flash-based SSDs. It would be awesome to see this kind of tech make its way to consumer systems so that I can have a physical RAMDisk with fast persistent storage (at least in terms of form factor; MCS uses NAND, not DRAM chips).
The full press release can be found here.
Subject: Storage | July 26, 2013 - 12:35 AM | Tim Verry
Tagged: turbo sshd, sshd, Seagate, nand, enterprise
Earlier this week Seagate took the wraps off of its latest Solid State Hybrid Drive (SSHD). Dubbed the Enterprise Turbo SSHD, this latest model is aimed at the enterprise server market. The drives combine a traditional 10K SAS mechanical hard drive in capacities up to 600GB with up to 32GB of NAND flash.
The 2.5" Enterprise Turbo SSHDs are aimed at servers with big data analytics, virtual desktops, and transaction processing workloads. The NAND flash acts as a cache for the mechanical hard drive, and caching is done by the controller at an I/O level.
According to Seagate, the company has been working with IBM over the past year to put the new SSHD through its paces. As such, the hybrid drives will first be available in the IBM System x and BladeCenter servers. The IBM versions will have 16GB of NAND flash and one year warranties according to the documentation available online.
Seagate further claims up to a three times random performance increase versus 15K SAS mechanical hard drives. The 600GB 10K SSHD is rated to deliver up to two times the IOPS of a traditional 10K SAS hard drive without a NAND cache.
The Enterprise Turbo also comes with enterprise-friendly drive self encryption options. The Seagate product page notes that the Enterprise Turbo SSHD will have a five year warranty. Pricing and detailed benchmarks are not yet available though some preliminary performance results can be found here.
The full press release can be found here.
Subject: General Tech, Storage | July 18, 2013 - 02:29 AM | Tim Verry
Tagged: nand, micron, flash, 16nm
Micron recently announced that it has begun sampling 16nm NAND flash to select partners. Micron expects to begin full production of the NAND chips using the smaller flash manufacturing process in the fourth quarter of this year (Q4 2013). Drives based on its new 16nm MLC NAND flash are expected to arrive as early as next year. (PC Perspective's own storage expert is currently overseas, but I managed to reach out over email to get some clarification, and his thoughts, on the Micron announcement.)
The announcement relates to new NAND flash that is smaller, but not necessarily faster, than the existing 20nm and 25nm flash chips used in current solid state drives. In the end, Micron is still delivering 128Gb (Gigabit) per die, but on a 16nm process. The 16nm flash is a pure shrink of 20nm, which is, in turn, a shrink of 25nm flash. In fact, Micron is able to get just under 6 Terabytes of storage out of a single 300mm wafer. These wafers are then broken down into dies that are packaged into individual flash chips, which are used in all manner of solid state storage devices from smartphone embedded storage to desktop SSDs. This 16nm flash still delivers 128Gb --which is 16GB-- per die, allowing for a 128GB SSD using as few as eight chips.
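A quick sanity check on those wafer and die numbers (decimal units assumed):

```python
# The wafer and die math behind the announcement's figures:
die_gb = 128 / 8          # 128 Gbit die = 16 GB
wafer_tb = 6              # "just under 6 Terabytes" per 300mm wafer

dies_per_wafer = wafer_tb * 1000 / die_gb
print(dies_per_wafer)     # 375.0 -> roughly 375 dies' worth per wafer

print(8 * die_gb)         # 128.0 -> the eight-chip 128GB SSD
```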
A single 16nm NAND flash die with a SSD in the background
Micron expects the 16nm MLC (multi-level cell) flash to be used in consumer SSDs, USB thumb drives, mobile devices, and cloud storage.
The 16nm process will allow Micron to get more storage out of the same sized wafer (300mm) used for current processes, which should mean flash memory that is not only smaller, but in theory cheaper as well.
A single wafer of 16nm NAND flash (just under 6TBs)
As Allyn further notes, the downside to the new 16nm NAND flash is a reduction in the number of supported P/E cycles. Micron has not released specific information on this, but the new 16nm MLC flash is expected to have fewer than 1,000 P/E cycles. For comparison, 25nm and 20nm flash are rated at 3,000 and 1,000 cycles, respectively.
In simple terms, P/E (program-erase) cycles describe the number of times that a specific portion of flash memory can be written before wearing out. SSD manufacturers were able to work around this with the transition from 25nm to 20nm and still deliver acceptable endurance on consumer drives, and I expect that similar techniques will be used for 16nm flash. For example, manufacturers could compress data prior to writing it out to the physical flash, or over-provision the actual hardware versus the reported capacity (i.e. a drive sold as a 100GB model that actually has 128GB of physical flash).
I don't think it will be a big enough jump that typical consumers will have to worry too much about this, considering the vast majority of operations will be reads and not writes. Despite the reduction in P/E cycles, SSDs with 16nm MLC NAND flash will still likely outlast a typical mechanical hard drive.
What do you think about the Micron announcement?
The full press release can be found below:
Subject: General Tech | November 30, 2012 - 01:38 PM | Jeremy Hellstrom
Tagged: nand, EMC, phase change memory
SSDs are not that old, but already there is a challenge that must be overcome if they are to remain a viable storage medium. As Allyn has discussed many times in articles and on the podcast, as NAND process shrinks continue, the number of write cycles before failure drops, lowering the life expectancy of the drive even as it allows for higher capacity chips and lower power consumption. Zahid Hussain is EMC's flash product division general manager, and he is confident that his company will be able to do what Hynix, Samsung and others have so far been unable to do: work with Micron to replace the NAND chips with Phase Change Memory based chips. This type of chip is non-volatile and could also find its way into DIMMs. Read more at The Register.
"It is anticipated that, as NAND process geometries shrink beyond 15nm or so, the working life will fall off drastically, speed will slacken and the error checking and correction logic will become much more complicated. At that point, roughly, it is hoped, a post-NAND technology will be productised and deliver chips that are denser than flash, faster than flash, approaching DRAM speed, byte-addressable instead of block-addressable, and with a longer working life. That seems like a real big ask."
Here is some more Tech News from around the web:
- RIM announces updates to developer ecosystem programs @ The Register
- Updating the 2012 AnandTech SMB / SOHO NAS Testbed
- Guru3D Rig of the Month - November 2012
- Double Fine’s Brad Muir dishes out BRAZEN details @ Kitguru
- In Calculator Arms Race, Casio Fires Back: Color Touchscreen ClassPad @ Slashdot
Podcast #221 - Intel Clover Trail, AMD's Trinity Desktop APUs, the Samsung 840 SSD with TLC, and more!
Subject: General Tech | October 4, 2012 - 02:56 PM | Ken Addison
Tagged: trinity, TLD, ssd, Samsung, podcast, nand, clover trail, APU, a8, A10-5800k, a10, 830
PC Perspective Podcast #221 - 10/04/2012
Join us this week as we talk about Intel Clover Trail, AMD's Trinity Desktop APUs, the Samsung 840 SSD with TLC, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, Allyn Malventano, and Scott Michaud
Program length: 1:21:21
Podcast topics of discussion:
- Week in Reviews:
- 0:49:00 This podcast is brought to you by alxTech
- News items of interest: