It has become increasingly apparent that flash memory die shrinks have hit a bit of a brick wall in recent years. The issues faced by the standard 2D Planar NAND process were apparent very early on. This was no real secret - here's a slide seen at the 2009 Flash Memory Summit:
Despite this, most flash manufacturers pushed the envelope as far as they could within the limits of 2D process technology, balancing shrinks against reliability and performance. One of the largest flash manufacturers was Intel, having joined forces with Micron in a joint venture dubbed IMFT (IM Flash Technologies). Intel remained in lock-step with Micron all the way up to 20nm, but chose to hold back at the 16nm step, presumably in order to shift full focus towards alternative flash technologies. This was essentially confirmed late last week with Intel's announcement of a shift to 3D NAND production.
Intel's press briefing seemed to focus more on cost efficiency than performance, and after reviewing the very few specs they released about this new flash, I believe we can do some theorizing as to the potential performance of this new flash memory. From the above illustration, you can see that Intel has chosen to go with the same sort of 3D technology used by Samsung - a 32 layer vertical stack of flash cells. This requires the use of an older / larger process technology, as it is too difficult to etch these holes at a 2x nm size. What keeps the die size reasonable is the fact that you get a 32x increase in bit density. Going off of a rough approximation from the above photo, imagine that 50nm die (8 Gbit), but with 32 vertical NAND layers. That would yield a 256 Gbit (32 GB) die within roughly the same footprint.
Representation of Samsung's 3D VNAND in 128Gbit and 86 Gbit variants.
20nm planar (2D) = yellow square, 16nm planar (2D) = blue square.
Image republished with permission from Schiltron Corporation.
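The die-capacity arithmetic above can be sketched in a few lines. The 8 Gbit planar figure and 32-layer count are the article's rough approximations, not vendor specs:

```python
# Rough die-capacity math: stacking 32 layers of cells multiplies
# per-die bit density by the layer count without shrinking the die
# footprint. Figures are approximations from the article.

GBIT_PER_PLANAR_DIE = 8   # ~50nm-class planar die (8 Gbit)
LAYERS = 32               # vertical stack depth

stacked_gbit = GBIT_PER_PLANAR_DIE * LAYERS
stacked_gb = stacked_gbit // 8   # bits to bytes

print(f"{stacked_gbit} Gbit = {stacked_gb} GB per die")
```

This is why a die built on an older, larger process can still land at a reasonable footprint: the density comes from going up, not from shrinking.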
It's likely a safe bet that IMFT flash will target a far lower cost/GB than the competing Samsung VNAND, and going with a relatively large 256 Gbit (vs. VNAND's 86 Gbit) per-die capacity is a smart move there, but let's not forget that there is a catch - write speed. Most NAND is very fast on reads but limited on writes. Shifting from 2D to 3D NAND netted Samsung a 2x speed boost per die, and another effective 1.5x speed boost from their choice to reduce per-die capacity from 128 Gbit to 86 Gbit. This effective speed boost comes from the fact that a given VNAND SSD has 50% more dies to reach the same capacity as an SSD using 128 Gbit dies, and more dies means more parallelism.
Now let's examine how Intel's choice of a 256 Gbit die impacts performance:
- Intel SSD 730 240GB = 16x128 Gbit 20nm dies
- 270 MB/sec writes and ~17 MB/sec/die
- Crucial MX100 128GB = 8x128Gbit 16nm dies
- 150 MB/sec writes and ~19 MB/sec/die
- Samsung 850 Pro 128GB = 12x86Gbit VNAND dies
- 470 MB/sec writes and ~40 MB/sec/die
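The per-die numbers above are simply the rated sequential write speed divided by the die count. A quick sketch, using the drive specs quoted in the list:

```python
# Back-of-the-envelope per-die write throughput for the drives listed
# above: total sequential write speed (MB/s) divided by die count.

drives = {
    # name: (rated write MB/s, die count)
    "Intel SSD 730 240GB":   (270, 16),
    "Crucial MX100 128GB":   (150, 8),
    "Samsung 850 Pro 128GB": (470, 12),
}

for name, (write_mbs, dies) in drives.items():
    print(f"{name}: ~{write_mbs / dies:.0f} MB/sec/die")
```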
If we do some extrapolation based on the assumption that IMFT's move to 3D will net the same ~2x write speed improvement seen by Samsung, combined with their die capacity choice of 256Gbit, we get this:
- Future IMFT 128GB SSD = 4x256Gbit 3D dies
- 40 MB/sec/die x 4 dies = 160 MB/sec
Even rounding up to 40 MB/sec/die, we can see that also doubling the die capacity effectively negates the performance improvement. While the IMFT-flash-equipped SSD will very likely be a lower cost product, it will (theoretically) see the same write speed limits as today's SSDs equipped with IMFT planar NAND. Now let's go one layer deeper on theoretical products and assume that Intel took the 18-channel NVMe controller from their P3700 Series and adapted it to a consumer PCIe SSD using this new 3D NAND. The larger die size limits the minimum capacity you can attain while still fully utilizing the 18-channel controller, so with one die per channel, you end up with this product:
- Theoretical 18 channel IMFT PCIE 3D NAND SSD = 18x256Gbit 3D dies
- 40 MB/sec/die x 18 dies = 720 MB/sec
- 18x32GB (die capacity) = 576GB total capacity
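The theoretical product above can be sketched in code. Note that the ~40 MB/sec/die figure is an extrapolation from Samsung's 2x improvement, not a measured IMFT number:

```python
# Sketch of the theoretical 18-channel PCIe product, using the
# article's assumed ~40 MB/sec/die for IMFT 3D NAND (an
# extrapolation, not a measured figure).

DIE_GBIT = 256
DIE_GB = DIE_GBIT // 8      # 32 GB per die
CHANNELS = 18               # P3700-class controller, one die per channel
MBS_PER_DIE = 40            # assumed post-3D-transition write speed

capacity_gb = CHANNELS * DIE_GB       # raw capacity before over-provisioning
write_mbs = CHANNELS * MBS_PER_DIE    # aggregate sequential write speed

print(f"{capacity_gb} GB raw, ~{write_mbs} MB/sec writes")
```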
Overprovisioning decisions aside, the above would be the lowest capacity product that could fully utilize the Intel PCIe controller. While the write performance is on the low side by PCIe SSD standards, the cost of such a product could easily be in the $0.50/GB range, or even less.
In summary, while we don't have any solid performance data, it appears that Intel's new 3D NAND is not likely to lead to a performance breakthrough in SSD speeds, but their choice of a more cost-effective per-die capacity is likely to give them significant margins and the wiggle room to offer SSDs at a far lower cost/GB than we've seen in recent years. This may be the step that was needed to push SSD costs into a range that can truly compete with HDD technology.
Subject: General Tech | August 6, 2013 - 01:56 AM | Tim Verry
Tagged: Samsung, vnand, vertical nand, charge trap flash, 128Gb, nand
Today, Samsung announced that it has begun mass production of a new kind of 3D NAND flash memory that offers up higher reliability and write performance versus traditional 2D “planar” technologies. The so-called VNAND (Vertical NAND) is currently being used in 128Gb (Gigabit) flash chips (matching current 2D flash chips), but the technology has the potential to go much further in terms of capacity.
The VNAND combines an updated version of Samsung’s Charge Trap Flash (CTF) technology (originally developed in 2006) with a vertical stacking and interconnect technology that uses special etching techniques to punch holes and electrical connections down from the top of the highest die to the bottom die.
Samsung claims that its proprietary interconnect technology is (currently) able to support up to 24 layers of flash memory. The resulting VNAND offers up to twice the write performance and between two and ten times higher reliability versus traditional 19nm floating gate NAND (the alternative to CTF NAND) developed on planar processes.
With traditional NAND flash, as flash density increases (such as the move from 25nm to 19nm NAND flash), inter-cell interference also increases due to thinner walls and increased leakage. Samsung is hoping to solve that problem with its vertically-stacked NAND by allowing density to increase without shrinking the individual layers. Further, each layer is separated by a dielectric (electric insulator) that is currently 50nm and constructed of Silicon Nitride (SiN). The company notes that there is a limit to the height at which flash can be stacked before it becomes uneconomical, but that limit is still a long way off compared to the densities NAND flash reaches today.
Samsung’s new 128Gb VNAND chip is expected to scale to at least 1Tb depending on consumer demand. The technology is aimed at both embedded NAND and SSDs, but the former is likely to make use of 3D vertical NAND first. Standard 2.5" SSDs could also benefit, but modern SSDs are already bottlenecked by the SATA III 6Gbps bus, which would mask much of the faster write speed potential. Mobile devices, however, could benefit from faster single-chip VNAND packages immediately, with faster write speeds and higher reliability (and potentially density) versus 2D NAND chips.
It is definitely a technology with potential that is worth keeping an eye on.
The full press release can be found over at Engadget.
Subject: Storage | July 31, 2013 - 05:34 AM | Tim Verry
Tagged: diablo technologies, DIMM, nand, flash memory, memory channel storage
Ottawa-based Diablo Technologies unveiled a new flash storage technology yesterday that it calls Memory Channel Storage. As the name suggests, the technology puts NAND flash into a DIMM form factor and interfaces the persistent storage directly with the processor via the integrated memory controller.
The Memory Channel Storage (MCS) is a drop-in replacement for DDR3 RDIMMs (Registered DIMMs) in servers and storage arrays. Unlike DRAM, MCS is persistent storage backed by NAND flash and it can allow servers to have Terabytes of storage connected to the CPU via the memory interface instead of mere Gigabytes of DRAM acting as either system memory or block storage. The photo provided in the technology report (PDF) shows a 400GB MCS module that can slot into a standard DIMM slot, for example.
Diablo Technologies claims that MCS exhibits lower latencies and higher bandwidth than PCI-E and SATA SSDs. More importantly, the storage latency is predictable and consistent, making it useful for applications such as high frequency stock trading where speed and deterministic latency are paramount. Further, users can get linear increases in throughput with each additional Memory Channel Storage module added to the system. Latencies with MCS are as much as 85% lower than PCI-E SSDs and 96% lower than SAS/SATA SSDs, according to Diablo Technologies. NAND flash maintenance such as wear leveling is handled by an on-board logic chip.
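To put those percentage claims in perspective, here is what they would mean against some hypothetical baseline latencies. The 100us and 500us baselines below are illustrative round numbers I chose, not figures from Diablo:

```python
# Illustration of Diablo's claimed latency reductions ("up to 85%
# lower than PCI-E SSDs, 96% lower than SAS/SATA SSDs"). Baseline
# latencies are hypothetical round numbers, not vendor figures.

pcie_latency_us = 100   # hypothetical PCI-E SSD access latency
sata_latency_us = 500   # hypothetical SAS/SATA SSD access latency

mcs_vs_pcie = pcie_latency_us * (1 - 0.85)   # 85% reduction
mcs_vs_sata = sata_latency_us * (1 - 0.96)   # 96% reduction

print(f"vs PCI-E: ~{mcs_vs_pcie:.0f} us, vs SAS/SATA: ~{mcs_vs_sata:.0f} us")
```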
Diablo Technologies is also aiming the MCS technology at servers running big data analytics, massive cloud databases, financial applications, server virtualization, and Virtual Desktop Infrastructure (VDI). MCS can act as super fast local storage or as a local cache for external (such as networked) storage in the form of mechanical hard drives or SSDs.
Some details about Memory Channel Storage are still unclear, but it looks promising. It will not be quite as fast in random access as DRAM, but it should be better (more bandwidth, lower latency) than both PCI-E and SATA-connected NAND flash-based SSDs. It would be awesome to see this kind of tech make its way to consumer systems so that I can have a physical RAMDisk with fast persistent storage (at least in form factor; MCS uses NAND rather than DRAM chips).
The full press release can be found here.
Subject: Storage | July 26, 2013 - 12:35 AM | Tim Verry
Tagged: turbo sshd, sshd, Seagate, nand, enterprise
Earlier this week Seagate took the wraps off of its latest Solid State Hybrid Drive (SSHD). Dubbed the Enterprise Turbo SSHD, this latest model is aimed at the enterprise server market. The drives combine a traditional 10K SAS mechanical hard drive in capacities up to 600GB with up to 32GB of NAND flash.
The 2.5" Enterprise Turbo SSHDs are aimed at servers with big data analytics, virtual desktops, and transaction processing workloads. The NAND flash acts as a cache for the mechanical hard drive, and caching is done by the controller at an I/O level.
According to Seagate, the company has been working with IBM over the past year to put the new SSHD through its paces. As such, the hybrid drives will first be available in IBM System x and BladeCenter servers. The IBM versions will have 16GB of NAND flash and one-year warranties according to the documentation available online.
Seagate further claims up to a three-times increase in random performance versus 15K SAS mechanical hard drives. The 600GB 10K SSHD is rated for up to two times better IOPS than a traditional 10K SAS hard drive without a NAND cache.
The Enterprise Turbo also comes with enterprise-friendly drive self encryption options. The Seagate product page notes that the Enterprise Turbo SSHD will have a five year warranty. Pricing and detailed benchmarks are not yet available though some preliminary performance results can be found here.
The full press release can be found here.
Subject: General Tech, Storage | July 18, 2013 - 02:29 AM | Tim Verry
Tagged: nand, micron, flash, 16nm
Micron recently announced that it has begun sampling 16nm NAND flash to select partners. Micron expects to begin full production of the NAND chips using the smaller flash manufacturing process in the fourth quarter of this year (Q4 2013). Drives based on its new 16nm MLC NAND flash are expected to arrive as early as next year. (PC Perspective's own storage expert is currently overseas, but I managed to reach out over email to get some clarification, and his thoughts, on the Micron announcement.)
The announcement relates to new NAND flash that is smaller, but not necessarily faster, than the existing 20nm and 25nm flash chips used in current solid state drives. In the end, Micron is still delivering 128Gb (Gigabit) per die, but using a 16nm process. The 16nm flash is a pure shrink of 20nm, which is, in turn, a shrink of 25nm flash. In fact, Micron is able to get just under 6 Terabytes of storage out of a single 300mm wafer. These wafers are then broken down into dies that are packaged into individual flash chips used in all manner of solid state storage devices, from smartphone embedded storage to desktop SSDs. This 16nm flash still delivers 128Gb (16GB) per die, allowing for a 128GB SSD using as few as eight chips.
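The wafer math implied above works out as follows. The die count ignores edge loss and yield, so treat it as an upper-bound estimate:

```python
# Wafer math implied by the article: ~6 TB of flash per 300mm wafer
# at 128 Gbit (16 GB) per die. Ignores edge loss and yield, so this
# is an upper-bound estimate.

WAFER_TB = 6
DIE_GBIT = 128
DIE_GB = DIE_GBIT // 8                       # 16 GB per die

dies_per_wafer = WAFER_TB * 1000 // DIE_GB   # dies from one wafer
chips_for_128gb = 128 // DIE_GB              # chips for a 128GB SSD

print(f"~{dies_per_wafer} dies/wafer; {chips_for_128gb} chips for a 128GB SSD")
```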
A single 16nm NAND flash die with a SSD in the background
Micron expects the 16nm MLC (multi-level cell) flash to be used in consumer SSDs, USB thumb drives, mobile devices, and cloud storage.
The 16nm process will allow Micron to get more storage out of the same sized wafer (300mm) used for current processes, which in theory should mean flash memory that is not only smaller, but (in theory) cheaper.
A single wafer of 16nm NAND flash (just under 6TBs)
As Allyn further notes, the downside to the new 16nm NAND flash is a reduction in the number of supported P/E cycles. Micron has not released specific information on this, but the new 16nm MLC flash is expected to have fewer than 1,000 P/E cycles. For comparison, 25nm and 20nm flash have P/E cycle ratings of 3,000 and 1,000 respectively.
In simple terms, P/E (program-erase) cycles relate to the number of times that a specific portion of flash memory can be written to before wearing out. SSD manufacturers were able to work around this with the transition from 25nm to 20nm and still deliver acceptable endurance on consumer drives, and I expect that similar techniques will be used for 16nm flash. For example, manufacturers could compress data prior to writing it out to the physical flash, or over-provision the actual hardware relative to the reported capacity (i.e. a drive sold as a 100GB model that actually has 128GB of physical flash).
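A rough endurance estimate shows why ~1,000 P/E cycles can still be workable for consumer drives. The write-amplification factor and daily-write workload below are illustrative assumptions on my part, not Micron specs:

```python
# Rough endurance estimate for a 16nm MLC drive. The write
# amplification and workload figures are illustrative assumptions,
# not Micron specifications.

capacity_gb = 128            # advertised drive capacity
pe_cycles = 1000             # assumed 16nm MLC endurance rating
write_amplification = 2.0    # assumed controller overhead (writes to
                             # flash per byte of host data)
host_writes_gb_per_day = 20  # assumed heavy consumer workload

total_host_writes_gb = capacity_gb * pe_cycles / write_amplification
lifetime_years = total_host_writes_gb / host_writes_gb_per_day / 365

print(f"~{total_host_writes_gb / 1000:.0f} TB of host writes, "
      f"~{lifetime_years:.1f} years at {host_writes_gb_per_day} GB/day")
```

Even with these pessimistic assumptions, the flash outlives the typical upgrade cycle of a consumer machine, which is the point made below.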
I don't think it will be a big enough jump that typical consumers will have to worry too much about this, considering the vast majority of operations will be reads rather than writes. Despite the reduction in P/E cycles, SSDs with 16nm MLC NAND flash will still likely outlast a typical mechanical hard drive.
What do you think about the Micron announcement?
The full press release can be found below:
Subject: General Tech | November 30, 2012 - 01:38 PM | Jeremy Hellstrom
Tagged: nand, EMC, phase change memory
SSDs are not that old, but NAND flash already faces a challenge that must be overcome if it is to remain a viable storage medium. As Allyn has discussed many times in articles and on the podcast, as NAND process shrinks continue, the number of write cycles before failure drops, which lowers the life expectancy of the drive even as it allows for higher capacity chips and lower power consumption. Zahid Hussain is EMC's flash product division general manager, and he is confident that his company will be able to do what Hynix, Samsung, and others have so far been unable to do: work with Micron to replace NAND chips with Phase Change Memory based chips. This type of chip is non-volatile and could also find its way into DIMMs. Read more at The Register.
"It is anticipated that, as NAND process geometries shrink beyond 15nm or so, the working life will fall off drastically, speed will slacken and the error checking and correction logic will become much more complicated. At that point, roughly, it is hoped, a post-NAND technology will be productised and deliver chips that are denser than flash, faster than flash, approaching DRAM speed, byte-addressable instead of block-addressable, and with a longer working life. That seems like a real big ask."
Here is some more Tech News from around the web:
- RIM announces updates to developer ecosystem programs @ The Register
- Updating the 2012 AnandTech SMB / SOHO NAS Testbed
- Guru3D Rig of the Month - November 2012
- Double Fine’s Brad Muir dishes out BRAZEN details @ Kitguru
- In Calculator Arms Race, Casio Fires Back: Color Touchscreen ClassPad @ Slashdot
Podcast #221 - Intel Clover Trail, AMD's Trinity Desktop APUs, the Samsung 840 SSD with TLC, and more!
Subject: General Tech | October 4, 2012 - 02:56 PM | Ken Addison
Tagged: trinity, TLD, ssd, Samsung, podcast, nand, clover trail, APU, a8, A10-5800k, a10, 830
PC Perspective Podcast #221 - 10/04/2012
Join us this week as we talk about Intel Clover Trail, AMD's Trinity Desktop APUs, the Samsung 840 SSD with TLC, and more!
The URL for the podcast is: http://pcper.com/podcast - Share with your friends!
- iTunes - Subscribe to the podcast directly through the Store
- RSS - Subscribe through your regular RSS reader
- MP3 - Direct download link to the MP3 file
Hosts: Ryan Shrout, Josh Walrath, Jeremy Hellstrom, Allyn Malventano, and Scott Michaud
Program length: 1:21:21
Podcast topics of discussion:
- Week in Reviews:
- 0:49:00 This podcast is brought to you by alxTech
- News items of interest:
- 1-888-38-PCPER or email@example.com
- http://twitter.com/ryanshrout and http://twitter.com/pcper
Biwin is a flash storage manufacturer founded in 1995, headquartered in Shenzhen, China, and specializing in USB, memory card, and SSD flash storage. The company has 20 SMT assembly lines and ISO9001:2000-certified factories, and employs more than 50 skilled engineers. Recently, it founded a subsidiary, Biwin America, with headquarters in San Jose, California. The new company will further expand Biwin's SSD offerings by developing and producing advanced solid state drives for the enterprise, embedded, and consumer markets.
The Vice President of Worldwide Marketing for the newly founded Biwin America stated that the company "will be dedicated to developing flash storage solutions that deliver superior performance and reliability." He further noted that the team is very excited to bring new SSDs to market.
A Biwin SATA 3 SSD
Biwin features 20 SMT (surface-mount technology) lines, die sorting, die packaging, and "sophisticated test and QC processes." The company is bringing its experience with flash storage to bear on the US market as it prepares new and expanded SSD products, which it will sell direct to OEMs as well as to consumers through authorized distributors.
More information on the company can be found here.
Subject: Memory | December 23, 2011 - 04:10 PM | Tim Verry
Tagged: supply, ram, price increase, nand, dram market, adata
Computer enthusiasts and OEMs alike have been living the dream of extremely cheap RAM modules; however, Adata CEO Simon Chen believes that the dream may be close to ending. In 2012, DRAM manufacturers will start cutting production, reducing supply so that they can charge more than they currently do (consistent overproduction over the past couple of years has kept supply plentiful and prices low). After the holiday season, PC OEMs will start to replenish their inventories, and when they do, they will build up a month's supply instead of a two-week supply.
Chen notes that the four major manufacturers of DRAM chips including Elpida Memory, Hynix Semiconductor, Micron Technology, and Powerchip Technology have suffered from selling the chips at such reduced prices for so long. While DRAM chips produced on older manufacturing processes may still be sold below the cost of production, newer DRAM manufactured on the 30nm process "will rebound from the current bottom level to a level above cash-flow production cost."
In addition to the reduced production and newer process, the demand for DRAM in general is expected to decrease due to the rising popularity of mobile computers, Chen notes. However, the decrease in desktop DRAM demand is partly balanced by increased demand for server memory, with data centers purchasing additional RAM direct from the manufacturers because server OEMs charge a hefty premium for RAM. Due to the shake-up in the industry, "many makers of DRAM modules have shifted business operation to other areas" like ruggedized memory and producing NAND flash chips for SSDs.
Admittedly, the memory makers are walking a fine line between spinning down production and being accused of price fixing; however, the ride has been a good one for consumers for a while now, and the manufacturers are likely getting tired of the razor-thin profit margins. Chen's analysis may well be correct in light of that fact: new process technology allowing for better yields, combined with generally lower production and big OEMs buying up more RAM for their own inventories, may well spell the end of being able to impulse buy tons of DDR3 RAM! What are your thoughts on both Chen's analysis of the price increase and the industry itself - do you think prices are likely to go up next year?
According to VR-Zone, Intel's newest enterprise series 710 Lyndonville solid state drives (SSD) will be launching soon, in a mid-August time frame, and will carry a price-per-gigabyte metric that only a corporate expense account could love.
The Intel 311. The 710 series will have the same 2.5" form factor.
The new drives will come in 100GB, 200GB, and 300GB capacities, priced at approximately $650, $1250, and $1900 USD respectively. Built on 25nm eMLC HET NAND, the drives feature 64MB of cache, user-controllable over-provisioning up to 20% (which helps drive longevity by reserving more of the drive for replacement of worn-out cells), and a SATA II 3.0Gbps connection. The SATA 3Gbps connection is not likely to bottleneck the drive, as it will only feature 270MB/s read and 210MB/s write speeds.
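Working out the cost-per-gigabyte from those approximate launch prices makes the "corporate expense account" point concrete:

```python
# Price-per-gigabyte for the announced 710 series capacities, using
# the approximate launch prices quoted above.

prices_usd = {100: 650, 200: 1250, 300: 1900}  # capacity (GB) -> price

for gb, usd in prices_usd.items():
    print(f"{gb}GB: ${usd / gb:.2f}/GB")
```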
The eMLC HET flash chips are higher-quality MLC chips that Intel hopes will provide enterprise-level SLC-like endurance without the higher cost of SLC chips. Interestingly, the drives only carry a 3-year warranty that is further limited by the state of the E9 wear level indicator: the warranty expires once the three years are up or the E9 indicator reaches 1, whichever comes first. The consumer-grade Intel 320 drives, on the other hand, carry a longer 5-year warranty.
My aging X-25 drive remembers the days when Intel pushed for driving down the cost of SSDs; however, does Intel still remember that goal?