Subject: General Tech, Memory, Storage | May 26, 2017 - 10:14 PM | Tim Verry
Tagged: XPoint, Intel, HPC, DIMM, 3D XPoint
Intel recently teased a bit of new information on its 3D XPoint DIMMs and held the technology's first public demonstration at the SAP Sapphire conference, where SAP’s HANA in-memory data analytics software was shown working with the new “Intel persistent memory.” Slated to arrive in 2018, the new Intel DIMMs based on the 3D XPoint technology developed by Intel and Micron will work in systems alongside traditional DRAM to provide a pool of fast, low latency, and high density nonvolatile storage that sits between expensive DDR4 and cheaper NVMe SSDs and hard drives. Looking at the storage stack, density and latency both increase as storage moves further from the CPU. Conversely, as storage and memory get closer to the processor, bandwidth increases, latency decreases, and cost per unit of storage rises. Intel is hoping to bridge the gap between system DRAM and PCI-E and SATA storage.
According to Intel, system RAM offers up 10 GB/s per channel and approximately 100 nanoseconds of latency. 3D XPoint DIMMs will offer 6 GB/s per channel and about 250 nanoseconds of latency. Below that are the 3D XPoint-based NVMe SSDs (e.g. Optane) on a PCI-E x4 bus, where they max out the bandwidth of the bus at ~3.2 GB/s with about 10 microseconds of latency. Intel claims that non-XPoint NVMe NAND solid state drives have around 100 microseconds of latency, and of course, it gets worse from there as you move to SSDs or hard drives hanging off the SATA bus.
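To put Intel's tiering claims in perspective, here is a quick back-of-the-envelope sketch using only the figures quoted above; it is an illustration of the claimed hierarchy, not a benchmark:

```python
# Storage tiers as Intel describes them, using the latency figures
# quoted in the article above. Illustrative only, not measured data.

tiers = {                       # latency in nanoseconds
    "DDR4 DRAM":        100,        # ~100 ns
    "3D XPoint DIMM":   250,        # ~250 ns
    "Optane NVMe SSD":  10_000,     # ~10 us on PCI-E x4
    "NAND NVMe SSD":    100_000,    # ~100 us
}

# Show each tier's latency relative to DRAM.
for name, ns in tiers.items():
    print(f"{name:16s} {ns:>8,} ns  ({ns / tiers['DDR4 DRAM']:.1f}x DRAM)")
```

The takeaway is how evenly the XPoint DIMMs split the gap: roughly 2.5x the latency of DRAM, but 40x faster than even an Optane NVMe SSD sitting on the PCI-E bus.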
Intel’s new XPoint DIMMs have persistent storage and will offer more capacity than will be possible and/or cost effective with DDR4 DRAM. In exchange for giving up some bandwidth and latency, enterprise users will get a large pool of very fast storage for their databases and other latency- and bandwidth-sensitive workloads. Intel does note that there are security concerns with the XPoint DIMMs being nonvolatile: an attacker with physical access could easily pull a DIMM and walk away with the data (it is at least theoretically possible to grab some data from RAM as well, but it will be much easier to grab it from the XPoint sticks). Encryption and other security measures will need to be implemented to secure the data, both in use and at rest.
Interestingly, Intel is not positioning the XPoint DIMMs as a replacement for RAM, but as a supplement. RAM and XPoint DIMMs will be installed in different slots of the same system: the DDR4 RAM will be used for the OS and system-critical applications, while the XPoint pool will hold the data that applications work on, much like a traditional RAM disk but without needing to load and save the data to a different medium for persistent storage, and offering a lot more gigabytes for the money.
While XPoint is set to arrive next year along with Cascade Lake Xeons, it will likely be a couple of years before the technology takes off. Adoption will require hardware and software support in workstations and servers, as well as developers willing to take advantage of it when writing their specialized applications. Fortunately, Intel started shipping the memory modules to its partners for testing earlier this year. It is an interesting technology, and the DIMM form factor and direct CPU interface will really let 3D XPoint memory shine and reach its full potential. It will primarily be useful in the enterprise, scientific, and financial industries, where there is a huge need for faster, lower latency storage that can accommodate massive (multi-terabyte) data sets that continue to get larger and more complex. It is a technology that likely will not trickle down to consumers for a long time, but I will be ready when it does. In the meantime, I am eager to see what kinds of things it will enable the big data companies and researchers to do! Intel claims it will be useful not only for supporting massive in-memory databases and accelerating HPC workloads but also for virtualization, private clouds, and software defined storage.
What are your thoughts on this new memory tier and the future of XPoint?
- Intel Has Started Shipping Optane Memory Modules
- Intel Optane Memory 32GB Review - Faster Than Lightning
- A Closer Look at Intel's Optane SSD DC P4800X Enterprise SSD Performance
Subject: Storage | May 24, 2017 - 08:45 PM | Allyn Malventano
Tagged: SolidScale, NVMf, NVMe, micron, fabric, Cassandra
A few weeks back, I was briefed on Micron’s new SolidScale Architecture. This is essentially Micron’s off-the-shelf solution that ties together a few different technologies in an attempt to consolidate large pools of NVMe storage into a central location that can then be efficiently segmented and distributed among peers and clients across the network.
Traditionally it has been difficult to effectively utilize large numbers of SSDs in a single server. The combined IOPS capabilities of multiple high-performance PCIe SSDs can quickly saturate the available CPU cores of the server due to kernel/OS IO overhead incurred with each request. As a result, a flash-based network server would be bottlenecked by the server CPU during high IOPS workloads. There is a solution to this, and it’s simpler than you might think: Bypass the CPU!
Intro and Upgrading the PS4 Pro Hard Drive
When Sony launched the PS4 Pro late last year, it introduced an unusual mid-cycle performance update to its latest console platform. But in addition to increased processing and graphics performance, Sony also addressed one of the original PS4's shortcomings: the storage bus.
The original, non-Pro PlayStation 4 utilized a SATA II bus, capping speeds at 3Gb/s. This was more than adequate for keeping up with the console's stock hard drive, but those who elected to take advantage of Sony's user-upgradeable storage policy and install an SSD faced the prospect of a storage bus bottleneck. As we saw in our original look at upgrading the PS4 with a solid state drive, the SSD brought some performance improvements in terms of load times, but these improvements weren't always as impressive as we might expect.
We therefore set out to see what performance improvements, if any, could be gained by the inclusion of SATA III in the PS4 Pro, and if this new Pro model makes a stronger case for users to shell out even more cash for a high capacity solid state drive. We weren't the only ones interested in this test. Digital Foundry conducted their own tests of the PS4 Pro's SATA III interface. They found that while a solid state drive in the PS4 Pro clearly outperformed the stock hard drive in the original PS4, it generally didn't offer much improvement over the SATA II-bottlenecked SSD in the original PS4, or even, in some cases, the stock HDD in the PS4 Pro.
But we noticed a major issue with Digital Foundry's testing process. For their SSD tests, they used the OCZ Trion 100, an older SSD with relatively mediocre performance compared to its latest competitors. The Trion 100 also has a relatively low write endurance and we therefore don't know the condition and performance characteristics of Digital Foundry's drive.
To address these issues, we conducted our tests with a brand new 1TB Samsung 850 EVO. While far from the cheapest, or even most reasonable option for a PS4 Pro upgrade, our aim is to assess the "best case scenario" when it comes to SSD performance via the PS4 Pro's SATA III bus.
Subject: Storage | May 18, 2017 - 04:26 PM | Jeremy Hellstrom
Tagged: corsair, corsair force mp500, mp500, M.2, NVMe, PS5007-E7, toshiba mlc
Corsair has entered the NVMe market with a new Force Series product, the MP500, which pairs Toshiba's 15nm MLC with the popular Phison PS5007-E7 controller. The Tech Report noticed one difference right away: that sticker is more than just for show, as it hides a layer of heat-dissipating copper inside, just like we have seen in Samsung products. Whether it was the sticker or some sort of secret sauce Corsair added, the MP500's performance pulled ahead of Patriot's Hellfire SSD overall. Read the full review to see where the drive showed the biggest performance differential.
"Corsair is throwing its hat into the NVMe SSD ring with the Force Series MP500 drive. We subjected this gumstick to our testing gauntlet to see how well the 240GB version fares against the rest of the formidable NVMe field."
Here are some more Storage reviews from around the web:
- Toshiba N300 6TB NAS HDD @ eTeknix
- ASUSTOR AS1004T NAS Server @ NikKTech
- ioSafe 216 2-Bay NAS @ Kitguru
- LaCie D2 Thunderbolt 3 10TB Professional Storage Drive Review @ NikKTech
- LaCie d2 Thunderbolt 3 10TB @ Kitguru
- Thecus N2810 Pro 2-Bay NAS @ techPowerUp
Subject: Storage | May 17, 2017 - 09:57 PM | Allyn Malventano
Tagged: western digital, wdc, WD, Red Pro, red, NAS, helium, HelioSeal, hdd, Hard Drive, 10TB
Western Digital has increased the capacity of its Red and Red Pro NAS hard disk lines to 10TB. WD acquired the HelioSeal technology, which enables helium-filled, hermetically sealed drives of higher capacities, through its purchase of HGST, and first used it to expand the Red lines to 8TB (our review of those here). HelioSeal has certainly proven itself, with over 15 million such units shipped so far.
We knew it was just a matter of time before we saw a 10TB Red and Red Pro, as it has been some time since the HGST He10 launched, and Western Digital's own 10TB Gold (datacenter) drive has been shipping for a while now.
- Red 10TB: $494
- Red Pro 10TB: $533
MSRP pricing looks a bit high given the lower cost/GB of the 8TB model, but with some time on the market and volume shipping, these should come down to cost parity with the lesser capacities.
Press blast appears after the break.
Subject: Storage | April 24, 2017 - 05:20 PM | Jeremy Hellstrom
Tagged: XPoint, srt, rst, Optane Memory, Optane, Intel, hybrid, CrossPoint, cache, 32GB, 16GB
At $44 for 16GB or $77 for a 32GB module, Intel's Optane Memory will cost you less in total than an M.2 SSD, though at a significantly higher price per gigabyte. The catch is that you need a Kaby Lake Core system to be able to utilize Optane, which means you are unlikely to be using a HDD. Al's tests show that Optane will also benefit a system using an SSD, reducing latency noticeably, although not as significantly as with a HDD.
The Tech Report tested it differently, sourcing a brand new Kaby Lake Core desktop system that did not ship with an SSD. Once installed, the Optane module enabled the system to outpace an affordable 480GB SSD in some scenarios; very impressive for a HDD-based machine. They also peeked at the difference Optane makes when paired with that affordable SSD in their full review.
"Intel's Optane Memory tech purports to offer most of the responsiveness of an SSD to systems whose primary storage device is a good old hard drive. We put a 32GB stick of Optane Memory to the test to see whether it lives up to Intel's claims."
Here are some more Storage reviews from around the web:
- Intel Optane Memory Review - 1.4GB/s Speed & 300K IOPS for $44 @ The SSD Review
- The Intel Optane Memory Module Review @ Hardware Canucks
- Kingston DCP1000 NVMe SSD Reaches 7GB/s @ Kitguru
- WD Blue 1,000 GiB SSD @ Hardware Secrets
- Synology DiskStation DS916+ 4-Bay NAS @ Kitguru
- Drobo 5N2 NAS @ Kitguru
- Kingston Ultimate GT 2TB Flash Drive @ The SSD Review
- Toshiba X300 6TB HDD @ Kitguru
Introduction, Specifications, and Requirements
Finally! Optane Memory sitting in our lab! Sure, it’s not the mighty P4800X we remotely tested over the past month, but this is right here, sitting on my desk. It’s shipping, too, meaning it could be sitting on your desk (or more importantly, in your PC) in just a matter of days.
The big deal about Optane is that it uses XPoint Memory, which has fast-as-lightning (faster, actually) response times of less than 10 microseconds. Compare this to the fastest modern NAND flash at ~90 microseconds, and the differences are going to add up fast. What’s wonderful about these response times is that they still hold true even when scaling an Optane product all the way down to just one or two dies of storage capacity. When you consider that managing fewer dies means less work for the controller, we can see latencies fall even further in some cases (as we will see later).
Introduction and Specifications
XPoint. Optane. QuantX. We've been hearing these terms thrown around for two years now. A form of 3D stackable non-volatile memory that promised 10x the density of DRAM and 1000x the speed and endurance of NAND. These were bold statements, and over the following months, we would see them misunderstood and misconstrued by many in the industry. These misconceptions were further amplified by some poor demo choices on the part of Intel (fortunately countered by some better choices made by Micron). Cooler heads eventually prevailed, as Jim Handy and other industry analysts helped explain that a 1000x improvement at the die level does not translate to the same improvement at the device level, especially when the first round of devices must comply with what will soon become a legacy method of connecting a persistent storage device to a PC.
Did I just suggest that PCIe 3.0 and the NVMe protocol, developed just for high-speed storage, are already legacy tech? Well, sorta.
That 'Future NVM' bar at the bottom of that chart there was a 2-year old prototype iteration of what is now Optane. Note that while NVMe was able to shrink down the yellow bar a bit, as you introduce faster and faster storage, the rest of the equation (meaning software, including the OS kernel) starts to have a larger and larger impact on limiting the ultimate speed of the device.
NAND Flash simplified schematic (via Wikipedia)
Before getting into the first retail product to push all of these links in the storage chain to the limit, let's explain how XPoint works and what makes it faster. Taking random writes as an example, NAND Flash (above) must program cells in pages and erase cells in blocks. As modern flash has increased in capacity, the sizes of those pages and blocks have scaled up roughly proportionally. Today we are at page sizes over 4KB and block sizes in the megabytes. When it comes to randomly writing to an already full section of flash, simply changing the contents of one byte on one page requires the clearing and rewriting of the entire block. The ratio between what the flash had to rewrite and what you actually wanted to write is called the write amplification factor. It's something that must be dealt with when it comes to flash memory management, but for XPoint it is a completely different story:
XPoint is bit addressable. The 'cross' structure means you can select very small groups of data via wordlines, with the ultimate selection resolving down to a single bit.
Since the programmed element effectively acts as a resistor, its output is read directly and quickly. Even better, none of that write amplification nonsense mentioned above applies here at all. There are no pages or blocks. If you want to write a byte, go ahead. Better still, bits can be changed regardless of their former state, meaning no erase or clear cycle must take place before writing; you just overwrite directly over what was previously stored. Is that 1000x faster / 1000x more write endurance than NAND thing starting to make more sense now?
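The write amplification gap can be made concrete with a toy calculation. The page and block sizes below are hypothetical round numbers for illustration, not any specific drive's geometry:

```python
# Toy write-amplification comparison: updating a single byte in an
# already-full region of NAND flash vs bit-addressable XPoint.
# Page/block sizes are hypothetical round numbers, not a real drive.

PAGE_SIZE = 4 * 1024             # 4 KB pages
PAGES_PER_BLOCK = 256            # => 1 MB erase blocks
BLOCK_SIZE = PAGE_SIZE * PAGES_PER_BLOCK

def nand_bytes_rewritten(user_bytes: int) -> int:
    """NAND must erase and rewrite every block the write touches."""
    blocks_touched = -(-user_bytes // BLOCK_SIZE)  # ceiling division
    return blocks_touched * BLOCK_SIZE

def xpoint_bytes_rewritten(user_bytes: int) -> int:
    """XPoint writes in place with no erase cycle, down to the bit."""
    return user_bytes

user_write = 1                   # change one byte in a full region
print(f"NAND rewrites:   {nand_bytes_rewritten(user_write):,} bytes")
print(f"XPoint rewrites: {xpoint_bytes_rewritten(user_write):,} byte(s)")
```

With these numbers, a one-byte update to a full NAND block drags a full megabyte through the erase/rewrite cycle, while XPoint touches exactly the byte you asked for; that is the endurance and latency story in miniature.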
Ok, with all of the background out of the way, let's get into the meat of the story. I present the P4800X:
ADATA has added another line of M.2 PCIe SSDs to their catalog with the XPG SX7000. These drives support NVMe and claim up to 1800 MB/s sequential read performance and 850 MB/s sequential write performance, with both tests measured on CrystalDiskMark at a queue depth of 32. Interestingly enough, their ATTO sequential write result, 860 MB/s, exceeds their claimed maximum. Again, all of these numbers are provided by ADATA, so it’s still up to third parties (like us) to verify. That said, ADATA provided a lot of information in their performance chart, which is nice to see.
The spec sheet (pdf) provides performance results for three SKUs: 128GB, 256GB, and 512GB. A fourth model (if you guessed 1TB, then you would be right) is also acknowledged, but not elaborated upon. These are all based on 3D TLC flash, with some undefined amount of SLC cache.
Pricing and availability are TBD, but it will come with a 5 year warranty.
Subject: Storage | April 7, 2017 - 07:01 AM | Scott Michaud
Tagged: WD, ssd, external ssd
Western Digital has just announced the My Passport SSD line of portable solid state drives. As you might expect, the major advantage of SSD-based portable storage is speed. This one connects via a USB Type-C port and is rated at up to 515 MB/s, although that hasn’t been benchmarked yet. The drives also support hardware 256-bit AES encryption via their security software.
According to Best Buy, the 256GB model ($99.99 USD) is already sold out, but the 512GB model ($199.99) and the 1TB model ($399.99) are both still listed for availability on the 14th of April.