Subject: Storage | April 18, 2016 - 12:32 PM | Sebastian Peak
Tagged: storage, sony, optical disc archive, optical disc, ODA, hard drives, backup, Archival
Sony has developed a higher-capacity version of their Optical Disc Archive (ODA), which now allows up to 3.3 TB of archival storage with the promise of 100-year retention.
Sony ODS-D280U (Image credit: Sony via Computer Base)
Of course the viability of such a system in the next century is unknown, and a working drive for the cartridge (which resembles the multi-CD changers found in cars a few years ago) would be needed to access the data. The idea is certainly interesting considering the potential for failure with traditional hard drives, though hard drives are relatively inexpensive and rewritable, offering more utility than the write-once Sony ODA cartridges.
Cartridge exploded view (Image credit: Sony via Computer Base)
For those seeking pure read-only archival storage, the higher capacity of the second-generation Sony ODA at least brings it closer to parity with current hard drive storage.
Subject: Storage | August 11, 2015 - 08:40 PM | Allyn Malventano
Tagged: toshiba, ssd, FMS 2015, flash, BiCS, Archive, Archival, 3d
We occasionally throw around the '3-bit MLC' (Multi Level Cell) term in place of 'TLC' (Triple Level Cell) when talking about flash memory. The terms are interchangeable, though some consider the former misleading since it still contains 'MLC'. At Toshiba's keynote today, they showed us why the former term matters:
Photo source: Sam Chen of Custom PC Review
That's right - QLC (Quadruple Level Cell), which is also 4-bit MLC, has been mentioned by Toshiba. As you can see at the right of that slide, storing four bits in a single flash cell means there are *sixteen* very narrow voltage ranges representing the stored data. That is a very hard thing to do, and even harder to do with high performance (programming/writing would take a relatively long time as the circuitry nudges cell voltages to such precise levels). This is why Toshiba pitched this flash as a low-cost solution for archival purposes. You wouldn't want to use this type of flash in a device that is written to constantly, since wear on the channel materials would have a much more significant effect on endurance. Restricting this flash to being written only a few times would keep it in a 'newer' state, making it effective for solid state data archiving.
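The bits-per-cell math above can be sketched in a few lines (a simple illustration of the doubling relationship, not anything specific to Toshiba's implementation):

```python
# Each additional bit per cell doubles the number of distinct voltage
# states a flash cell must reliably hold: levels = 2 ** bits_per_cell.
# With a fixed total voltage window, the margin per state shrinks as
# 1/levels, which is why QLC programming is so much harder than TLC.

def cell_levels(bits_per_cell: int) -> int:
    """Number of distinct voltage states needed to store N bits per cell."""
    return 2 ** bits_per_cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name} ({bits}-bit): {cell_levels(bits)} voltage states")
```

Running this shows the jump from TLC's eight states to QLC's sixteen, i.e. each state gets roughly half the voltage margin of the previous generation.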
The 1x / 0.5x / 6x figures appearing in the slide are meant to compare relative endurance to Toshiba's own planar 15nm flash. The figures suggest that Toshiba's BiCS 3D flash is efficient enough to go to QLC (4-bit) levels and still maintain a higher margin than their current MLC (2-bit) 2D flash.
More to follow as we continue our Flash Memory Summit coverage!
Subject: Storage, Shows and Expos | September 9, 2014 - 04:51 PM | Allyn Malventano
Tagged: WDC, Western Digital, WD, idf 2014, idf, hdd, Cold, Archival, Ae
We talked about helium-filled, shingled HDDs from HGST earlier today. Helium may give you reduced power demands, but at the added expense of hermetically sealed enclosures compared to conventional HDDs. Shingling may give added capacity, but at the expense of being forced into specific writing methods. Now we know Western Digital's angle on archival / cold storage:
...so instead of going with higher-cost newer technologies, WD is taking their consumer products and making them more robust. They are also abandoning conventional thinking about capacity increments and moving to 100GB increments. The idea is that once a large company or distributor has qualified a specific HDD model on their hardware, that model will stick around for a while, but will continue shipping at increased capacities as platter density yields improve over time. WD has also told me that capacities may even be mixed and matched within a 20-box of drives, so long as the average capacity matches the box label. This works in the field of archival / cold storage for a few reasons:
- Archival storage systems generally do not use conventional RAID (where an entire array of matching capacity disks are spinning simultaneously). Drives are spun up and written to individually, or spun up individually to service the occasional read request. This saves power overall, and it also means the individual drives can vary in capacity with no ill effects.
- Allowing for variable capacity binning helps WD ship more usable platters/drives overall (i.e. not rejecting drives that can't meet 6TB). This should drive overall costs down.
- Increasing capacity by only a few hundred GB per drive turns into *huge* differences in cost when you scale that difference up to the number of drives you would need to handle a very large total capacity (i.e. Exabytes).
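The mixed-capacity "20-box" idea described above can be sketched as a quick check (the drive capacities and the averaging rule here are illustrative assumptions, not WD's published spec):

```python
# Sketch of a mixed-capacity drive box: individual drives vary in
# capacity, but the box ships as long as the average capacity meets
# the label. Values below are hypothetical examples.

def box_meets_label(capacities_tb, label_tb):
    """Check that the average drive capacity meets or exceeds the box label."""
    return sum(capacities_tb) / len(capacities_tb) >= label_tb

# A 20-drive box labeled 6.0 TB, with drives binned in 100GB steps:
box = [6.0, 6.1, 5.9, 6.2, 6.0] * 4   # 20 drives averaging 6.04 TB
print(box_meets_label(box, 6.0))       # True
```

The point is that no individual drive needs to hit exactly 6TB, which is what lets WD ship platters that would otherwise be rejected.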
So the idea here is that WD is sticking with what they do best, which they can potentially do even more cheaply than their consumer products. That said, this is really meant for enterprise use and not as a way for a home power user to save a few bucks on a half-dozen drives for their home NAS. You really need an infrastructure in place that can handle variable-capacity drives seamlessly. While these drives do not employ SMR to gain greater capacity, that may work out as a bonus, as writes can be performed in a way that all current systems are compatible with (even though I suspect they will be tuned more for sequential write workloads).
Here's an illustration of this difference:
The 'old' method meant that drives on the left half of the above bell curve would have to be sold as 5TB units.
With the 'new' method, drives can be sold based on a spec closer to their actual capacity yield. For a given model, shipping capacities would increase as time goes on (top to bottom of the above graphic).
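The difference between the two binning methods can be sketched as a rounding rule (the yield values below are hypothetical; capacities are in GB to keep the arithmetic exact):

```python
# "Old" method: a drive's usable capacity is rounded down to the nearest
# whole-TB tier, so a 5.78 TB yield sells as 5 TB.
# "New" method: round down to the nearest 100 GB increment instead,
# so the same drive sells as 5.7 TB. Example values are illustrative.

def old_bin_gb(capacity_gb: int) -> int:
    return (capacity_gb // 1000) * 1000   # nearest whole-TB tier below

def new_bin_gb(capacity_gb: int) -> int:
    return (capacity_gb // 100) * 100     # nearest 100 GB step below

for yield_gb in (5400, 5780, 5950):
    print(f"{yield_gb} GB yield -> old: {old_bin_gb(yield_gb)} GB, "
          f"new: {new_bin_gb(yield_gb)} GB")
```

Scaled across the many thousands of drives in an exabyte-class deployment, the few hundred GB recovered per drive by the finer binning adds up to a large cost difference.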
To further clarify what is meant by the term 'cold storage' - the data itself is cold, as in rarely if ever accessed:
Examples of this would be Facebook posts / images from months or years ago. That data may be rarely touched, but it needs to be accessible enough to be browsed via the internet. The few-second archival HDD spinup can handle this sort of thing, while a tape system would take far too long and would likely cause that data request to time out.