Kingston's DCP1000 NVMe PCIe SSD: fast and outside most people's budgets, just like a race car

Subject: Storage | June 12, 2017 - 03:42 PM |
Tagged: kingston, DCP1000, enterprise ssd, NVMe, PCIe SSD

The Kingston DCP1000 NVMe PCIe SSD comes in 800GB, 1.6TB, and 3.2TB capacities, though as it is an enterprise-class drive even the smallest size will cost you over $1000.  Even with a price beyond the budget of almost all enthusiasts, it is interesting to see the performance of this drive, especially as KitGuru's testing showed it to be faster than the Intel DC P3608.  KitGuru cracked the 1.6TB card open to see how it works and found four Kingston 400GB NVMe M.2 SSDs inside, connected by a PLX PEX8725 24-lane, 10-port PCIe 3.0 switch which then passes the data on to the card's PCIe 3.0 x8 connector.  Each of those 400GB SSDs has its own Phison PS5007-E7 eight-channel quad-core controller, which leads to very impressive performance.  They did have some quibbles about the performance consistency of the drive; however, it is something they have seen on most drives of this class and not something specific to Kingston's drive.
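
For a rough sense of why that internal layout works, here is a back-of-the-envelope bandwidth sketch. It uses only nominal PCIe 3.0 link rates; the drive count and lane widths come from KitGuru's teardown, while the arithmetic itself is standard PCIe math rather than anything measured on the card:

```python
# Back-of-the-envelope PCIe bandwidth math for a DCP1000-style layout
# (four x4 M.2 SSDs behind a PLX switch with a PCIe 3.0 x8 uplink).
# Figures are nominal link rates, not measured results.

GT_PER_LANE = 8.0                            # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130                         # 128b/130b line encoding
GBPS_PER_LANE = GT_PER_LANE * ENCODING / 8   # ~0.985 GB/s per lane

def link_bandwidth(lanes: int) -> float:
    """Peak one-direction bandwidth of a PCIe 3.0 link, in GB/s."""
    return lanes * GBPS_PER_LANE

downstream = 4 * link_bandwidth(4)   # four M.2 drives at x4 each -> ~15.8 GB/s
uplink = link_bandwidth(8)           # host-facing x8 connector   -> ~7.9 GB/s

print(f"Downstream (4 x4 drives): {downstream:.1f} GB/s")
print(f"Uplink (x8 to host):      {uplink:.1f} GB/s")
# The x8 uplink is the ceiling, which is why ~7 GB/s sequential reads
# sit just under the theoretical limit of the slot.
```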

Kingston-DCP1000-Back.jpg

"Move over Intel DC P3608, we have a new performance king! In today’s testing, it was able to sustain sequential read and write speeds of 7GB/s and 6GB/s, respectively! Not only that, but it is able to deliver over 1.1million IOPS with 4KB random read performance and over 180K for write."

Here are some more Storage reviews from around the web:


Introduction, How PCM Works, Reading, Writing, and Tweaks

I’ve seen a bit of flawed logic floating around related to discussions about 3D XPoint technology. Some are directly comparing the cost per die to NAND flash (you can’t - 3D XPoint likely has fewer fab steps than NAND - especially when compared with 3D NAND). Others are repeating a bunch of terminology and element names without taking the time to actually explain how it works, and far too many folks out there can't even pronounce it correctly (it's spoken 'cross-point'). My plan is to address as much of the confusion as I can with this article, and I hope you walk away understanding how XPoint and its underlying technologies (most likely) work. While we do not have absolute confirmation of the precise material compositions, there is a significant amount of evidence pointing to one particular set of technologies. With Optane Memory now out in the wild and purchasable by folks wielding electron microscopes and mass spectrometers, I have seen enough additional information come across to assume XPoint is, in fact, PCM based.

XPoint.png

XPoint memory. Note the shape of the cell/selector structure. This will be significant later.

While we were initially told at the XPoint announcement event Q&A that the technology was not phase change based, there is overwhelming evidence to the contrary, and it is likely that Intel did not want to let the cat out of the bag too early. The funny thing about that is that both Intel and Micron were briefing on PCM-based memory developments five years earlier, and nearly everything about those briefings lines up perfectly with what appears to have ended up in the XPoint that we have today.

comparison.png

Some die-level performance characteristics of various memory types (source).

The above figures were sourced from a 2011 paper and may be a bit dated, but they do a good job of putting some actual numbers to the die-level performance of the various solid state memory technologies. We can also see where the ~1000x speed and ~1000x endurance comparisons of XPoint to NAND flash came from. Now, of course, those performance characteristics do not directly translate to the performance of a complete SSD package containing those dies. Controller overhead and management must take their respective cuts, as is shown by the performance of the first generation XPoint SSD we saw come out of Intel:

gap.png

The ‘bridging the gap’ Latency Percentile graph from our Intel SSD DC P4800X review.
(The P4800X comes in at 10us above).
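
To make the "controller overhead takes its cut" point concrete, here is a purely illustrative latency budget. The media figure is in the ballpark of published PCM die numbers; the controller, interface, and software overheads are assumptions chosen only to show how a sub-microsecond medium can end up near the ~10us device-level figure above:

```python
# Illustrative only: why die-level latency doesn't translate directly into
# device-level latency. The overhead figures below are placeholder
# assumptions, not measured values for any particular controller.

media_read_ns = 100           # order-of-magnitude PCM die read latency
controller_ns = 5_000         # assumed controller / FTL / ECC overhead
nvme_pcie_ns = 4_000          # assumed NVMe command + PCIe transfer overhead
software_ns = 900             # assumed driver / interrupt / OS path

total_ns = media_read_ns + controller_ns + nvme_pcie_ns + software_ns
print(f"Die-level read:    {media_read_ns} ns")
print(f"Device-level read: {total_ns / 1000:.1f} us")  # lands near ~10 us
```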

There have been a few very vocal folks out there chanting 'not good enough', without the basic understanding that the first publicly available iteration of a new technology never represents its ultimate performance capabilities. It took NAND flash decades to make it into usable SSDs, and another decade before climbing to the performance levels we enjoy today. Time will tell if this holds true for XPoint, but given Micron's demos and our own observed performance of Intel's P4800X and Optane Memory SSDs, I'd argue that it is most certainly off to a good start!

XPoint Die.jpg

A 3D XPoint die, submitted for your viewing pleasure (click for larger version).

You want to know how this stuff works, right? Read on to find out!

ASUS X299 Enables Intel Virtual RAID on CPU - RAID-0 up to 20 SSDs!

Subject: Storage | May 31, 2017 - 08:58 PM |
Tagged: x299, VROC, Virtual RAID on CPU, raid, Intel, asus

Ken and I have been refreshing our Google search results ever since seeing the term 'VROC' slipped into the ASUS press releases. Virtual RAID on CPU (VROC) is an optional Skylake-X-specific feature carried over from Intel's Xeon parts, which employ RSTe to create a RAID array without needing the chipset to tie it all together.

Well, we finally saw an article pop up over at PCWorld, complete with a photo of the elusive Hyper M.2 X16 card:

dsc06473-100724331-orig.jpg

The theory is that you will be able to use the one, two, or three M.2 slots of an ASUS X299 motherboard, presumably passing through the chipset (and bottlenecked by DMI), or you can shift the SSDs over to a Hyper M.2 X16 card and have four piped directly to the Skylake-X CPU. If you don't have all of your lanes occupied by GPUs, you can even add additional cards to scale up to a theoretical maximum of a 20-way RAID-0 supporting a *very* theoretical 128 GB/s.
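
As a rough sketch of why the chipset path matters, here is the nominal link math. DMI 3.0 is treated as roughly equivalent to a PCIe 3.0 x4 link, and the per-drive sequential figure is an assumption for illustration rather than a spec for any particular SSD:

```python
# Rough math on why piping M.2 drives through the chipset (DMI) caps the
# aggregate, while a Hyper M.2 x16 card on CPU lanes does not. Nominal
# PCIe 3.0 link rates; real-world throughput will be lower.

GBPS_PER_LANE = 8.0 * (128 / 130) / 8   # ~0.985 GB/s per PCIe 3.0 lane

dmi_link = 4 * GBPS_PER_LANE    # DMI 3.0 ~ PCIe 3.0 x4 (~3.9 GB/s, shared)
hyper_card = 16 * GBPS_PER_LANE # x16 card split into four x4 M.2 (~15.8 GB/s)

drives = 4
per_drive_seq = 2.5             # assumed GB/s per NVMe drive, illustrative

print(f"Chipset path (DMI): capped near {dmi_link:.1f} GB/s total")
print(f"CPU-lane x16 card:  up to {min(drives * per_drive_seq, hyper_card):.1f} GB/s for {drives} drives")
```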

A couple of gotchas here:

  • Only works with Skylake-X (not Kaby Lake-X)
  • RAID-1 and RAID-5 are only possible with a dongle (seriously?)
  • VROC is supposedly only bootable when using Intel SSDs (what?)

Ok, so the first one is understandable, given that Kaby Lake-X will only have 16 PCIe lanes directly off of the CPU.

dsc06470-100724330-orig.jpg

The second is, well, annoying, but understandable once you consider that some server builders may want to capitalize on the RSTe-type technology without having to purchase server hardware. It's still a significant annoyance, because how long has it been since anyone had to deal with a freaking hardware dongle to unlock a feature on a consumer part? That said, most enthusiasts are probably fine with RAID-0 for their SSD volume, given that they would be going purely for increased performance.

dsc06542-100724333-orig.jpg

The third essentially makes this awesome tech dead on arrival. Requiring Intel-branded M.2 SSDs for VROC bootability is a nail in the coffin. Enthusiasts are not going to want to buy 4 or 8 (or more) middle-of-the-road Intel SSDs (the only M.2 NAND SSD available from Intel is the 600p) for their crazy RAID - they are going to go with something faster, and if that can't boot, that's a major issue.

We'll keep a lookout and post updates as we get official word from Intel on VROC!

Source: PCWorld

COMPUTEX 2017: QNAP Unveils World's First Ryzen-based NAS

Subject: Storage, Shows and Expos | May 31, 2017 - 01:46 PM |
Tagged: TS-x77, amd, ryzen, qnap, NAS, computex 2017

QNAP are providing a sneak peek at a new line of NAS devices powered by AMD's Ryzen processors.  The TS-x77 series will come in 6, 8, and 12-bay models with an AMD Ryzen 7 1700, Ryzen 5 1600, or Ryzen 5 1400 processor and up to 8, 16, 32, or 64GB of DDR4 RAM, depending on the model.

top.jpg

The devices support RAID 0/1/5/6/10/50/60, RAID 1/5/6/10/50/60 + spare, single-disk, and JBOD configurations, all with AES-NI encryption acceleration.  Internally there are quite a lot of opportunities to customize your NAS: on all models you will find a pair of M.2 2242/2260/2280/22110 SATA 6 Gb/s SSD slots for your hot storage, and depending on the model you will have a mix of 2.5" and dual 2.5"/3.5" drive bays for your SSDs or HDDs.
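
For readers weighing those RAID modes, here is a minimal usable-capacity sketch. It is simplified (identical drives, no hot spares, no filesystem overhead), and the drive count and size are arbitrary example values, not a QNAP configuration:

```python
# Quick usable-capacity math for common RAID levels with N identical drives.
# Simplified illustration only; ignores spares and filesystem overhead.

def usable_tb(level: str, n: int, drive_tb: float) -> float:
    if level == "RAID 0":  return n * drive_tb
    if level == "RAID 5":  return (n - 1) * drive_tb
    if level == "RAID 6":  return (n - 2) * drive_tb
    if level == "RAID 10": return n * drive_tb / 2
    raise ValueError(level)

n, size = 8, 10.0   # e.g. an 8-bay model filled with 10TB drives (example only)
for level in ("RAID 0", "RAID 5", "RAID 6", "RAID 10"):
    print(f"{level:7s}: {usable_tb(level, n, size):5.1f} TB usable of {n * size:.0f} TB raw")
```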

Storage is not the only possibility for expansion in these NAS devices: all models contain three PCIe 3.0 slots, one x8 and two x4, which you can use for a PCIe SSD, 10GbE or 40GbE network cards, or perhaps even a GPU for local transcoding.  Externally you have four Gigabit Ethernet connectors, two USB 3.1 ports (one Type-C and one Type-A), as well as five USB 3.0 ports.

ts1277.PNG

These will not be available until Q3, so we won't be able to review one for a while, but rest assured that we are at least as interested in seeing the performance of Ryzen in a NAS as you are.

Source: QNAP

Computex 2017: Toshiba Launches XG5 NVMe Client SSD With 64-Layer BiCS Flash

Subject: Storage | May 30, 2017 - 09:00 AM |
Tagged: toshiba, ssd, ocz, NVMe, nand, M.2, computex 2017, BiCS, 64-Layer

Last night we saw WD launch the first client SSDs with 64-layer NAND flash, but recall that WD/SanDisk is in a partnership with Toshiba to produce this new third-generation BiCS memory, which means Toshiba is also launching its own product wrapped around this new high-density flash:

XG5_front_angled_2_small.png

Enter the Toshiba XG5. It is certainly coming on strong here, as evidenced by the specs:

specs.png

Unlike the WD/SanDisk launch, the BiCS flash on this Toshiba variant sits behind an NVMe SSD controller, with stated read speeds of 3 GB/s and writes just over 2 GB/s. We don't yet have random performance figures, but we certainly expect it to be no slouch given the expected performance of this newest generation of flash memory. Let's take a quick look at some of the high points there:

BiCS.png

Alright, so we have the typical things you'd expect, like better power efficiency and higher endurance, but there is a significant entry under the performance category - 1-shot, full-sequence programming. This is a big deal, since writing to flash memory is typically done in stages, with successive program cycles nudging cell voltages closer to their targets with each pass. This takes time and is one of the main things holding back the write speeds of NAND flash. This new BiCS is claimed to be able to successfully write in a single program cycle, which should translate to noticeable improvements in write latency.
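
As a toy illustration of why multi-pass programming is slow, here is a sketch contrasting conventional incremental step pulse programming (ISPP) with a single full-sequence program. The pulse time, step size, and target voltage are made-up values for illustration, not BiCS specifications:

```python
# Toy model: ISPP nudges the cell toward its target threshold voltage and
# verifies after each pulse; "1-shot" programming aims to hit the target in
# a single program cycle. All numbers are illustrative assumptions.

PULSE_US = 50  # assumed time per program pulse + verify, in microseconds

def ispp_program(target_mv: int, step_mv: int = 300) -> int:
    """Time (us) to program one cell with stepped pulses."""
    v, t = 0, 0
    while v < target_mv:      # pulse, verify, repeat until target reached
        v += step_mv
        t += PULSE_US
    return t

def one_shot_program(target_mv: int) -> int:
    """One full-sequence program cycle, regardless of target level."""
    return PULSE_US

target = 2400  # arbitrary target threshold voltage (mV) for a TLC level
print(f"ISPP:     {ispp_program(target)} us")   # many pulses
print(f"One-shot: {one_shot_program(target)} us")
```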

BiCS-FMS.png

Another thing helping with writes is that the XG5 will have its BiCS flash operating in a hybrid mode, meaning these are TLC SSDs with an SLC cache. We do not have confirmed cache sizes to report, but it's a safe bet that they will be similar to competing products.
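
Since the cache sizes are unconfirmed, the following is only a toy model of how a hybrid TLC-with-SLC-cache drive absorbs write bursts; every number in it is an assumption for illustration, not a Toshiba spec:

```python
# Toy model of hybrid (SLC-cached TLC) write behavior: bursts that fit in
# the SLC cache complete at the cache speed, anything beyond spills to the
# slower TLC direct-write path. All values are assumptions.

slc_cache_gb = 12        # assumed cache size
slc_speed = 2.0          # assumed GB/s while writing into the SLC cache
tlc_speed = 0.6          # assumed GB/s once writes spill to TLC

def burst_write_time(total_gb: float) -> float:
    """Seconds to absorb a sustained write burst of `total_gb`."""
    cached = min(total_gb, slc_cache_gb)
    spilled = max(total_gb - slc_cache_gb, 0.0)
    return cached / slc_speed + spilled / tlc_speed

for gb in (4, 12, 48):
    print(f"{gb:>3} GB burst: {burst_write_time(gb):6.1f} s")
```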

We don't yet have pricing info, but we do know that the initial capacities will be 256GB, 512GB, and 1TB. The XG5 is launching in the OEM channel in the second half of 2017. While this one is an OEM product, remember that OCZ is Toshiba's brand for client SSDs, so there's a possibility we may see a retail variant appear under that name in the future.

Press blast after the break.

Source: Toshiba

Computex 2017: Western Digital Launches Client SSDs Sporting 64-Layer NAND

Subject: Storage | May 29, 2017 - 11:42 PM |
Tagged: western digital, wdc, WD, Ultra, ssd, sandisk, nand, computex 2017, Blue, BiCS, 3d

Western Digital bought SanDisk nearly two years ago, but we had not really seen any products jointly launched under both brand labels. Until today:

PRN-Graphic_3D-NAND.jpg

The WD Blue 3D NAND SATA SSD and SanDisk Ultra 3D SSD are two products with identical internals. Specifically, these are the first client SSDs built with 64-layer 3D NAND technology. Some specs:

  • Sequential read: 560 MB/s
  • Sequential write: 530 MB/s
  • Capacity: 250GB, 500GB, 1TB, 2TB
  • Form factor: 2.5" (WD and SanDisk), M.2 (SATA) 2280 (WD only)

MSRPs start at $99.99 for the 250GB models of all flavors (2.5" / M.2 SATA), and all products will ship with a 3-year warranty.

It might seem odd that we see an identical product shipped under two different brands owned by the same company, but WD is likely leveraging the large OEM relationship held by SanDisk. I'm actually curious to see how this pans out long term because it is a bit confusing at present.

Full press blast after the break:

Intel Persistent Memory Using 3D XPoint DIMMs Expected Next Year

Subject: General Tech, Memory, Storage | May 26, 2017 - 10:14 PM |
Tagged: XPoint, Intel, HPC, DIMM, 3D XPoint

Intel recently teased a bit of new information on its 3D XPoint DIMMs and gave the first public demonstration of the technology at the SAP Sapphire conference, where SAP’s HANA in-memory data analytics software was shown working with the new “Intel persistent memory.” Slated to arrive in 2018, the new Intel DIMMs based on the 3D XPoint technology developed by Intel and Micron will work in systems alongside traditional DRAM to provide a pool of fast, low latency, and high density nonvolatile storage that is a middle ground between expensive DDR4 and cheaper NVMe SSDs and hard drives. Looking at the storage stack, storage density increases along with latency the further it sits from the CPU. The opposite is also true: as storage and memory get closer to the processor, bandwidth increases, latency decreases, and cost per unit of storage rises. Intel is hoping to bridge the gap between system DRAM and PCI-E and SATA storage.

Intel persistent memory DIMM.jpg

According to Intel, system RAM offers up 10 GB/s per channel and approximately 100 nanoseconds of latency. 3D XPoint DIMMs will offer 6 GB/s per channel and about 250 nanoseconds of latency. Below that are the 3D XPoint-based NVMe SSDs (e.g. Optane) on a PCI-E x4 bus, where they max out the bandwidth of the bus at ~3.2 GB/s with 10 microseconds of latency. Intel claims that non-XPoint NVMe NAND solid state drives have around 100 microseconds of latency, and of course, it gets worse from there when you go to NAND-based SSDs or even hard drives hanging off the SATA bus.
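
Laying Intel's quoted figures out side by side makes the "middle ground" positioning easy to see. These are Intel's claimed numbers from the paragraph above, not our measurements, and NAND NVMe bandwidth was not quoted:

```python
# The storage/memory tiers Intel describes, using the figures quoted above.
# Ratios are computed only to show relative positioning.

tiers = [
    # (name, bandwidth GB/s per channel or link, approx latency ns)
    ("DDR4 DRAM (per channel)",        10.0,     100),
    ("3D XPoint DIMM (per channel)",    6.0,     250),
    ("Optane NVMe SSD (PCIe x4)",       3.2,  10_000),
    ("NAND NVMe SSD",                  None, 100_000),  # bandwidth not quoted
]

dram_latency = tiers[0][2]
for name, bw, lat in tiers:
    bw_txt = f"{bw:4.1f} GB/s" if bw is not None else "   n/a   "
    print(f"{name:32s} {bw_txt}   {lat:>7,} ns   ({lat / dram_latency:>5.0f}x DRAM latency)")
```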

Intel’s new XPoint DIMMs have persistent storage and will offer more capacity than will be possible and/or cost effective with DDR4 DRAM. In giving up some bandwidth and latency, enterprise users will be able to have a large pool of very fast storage for their databases and other latency and bandwidth sensitive workloads. Intel does note that there are security concerns with the XPoint DIMMs being nonvolatile, in that an attacker with physical access could easily pull the DIMM and walk away with the data (it is at least theoretically possible to grab some data from RAM as well, but it will be much easier to grab the data from the XPoint sticks). Encryption and other security measures will need to be implemented to secure the data, both in use and at rest.

Intel Slide XPoint Info.jpg

Interestingly, Intel is not positioning the XPoint DIMMs as a replacement for RAM, but as a supplement. RAM and XPoint DIMMs will be installed in different slots of the same system: the DDR4 RAM will be used for the OS and system critical applications, while the XPoint pool will be used for the data that applications work on, much like a traditional RAM disk but without needing to load and save the data to a different medium for persistence, and offering a lot more GBs for the money.

While XPoint DIMMs are set to arrive next year along with Cascade Lake Xeons, it will likely be a couple of years before the technology takes off. It is going to require hardware and software support in workstations and servers, as well as developers willing to take advantage of it when writing their specialized applications. Fortunately, Intel started shipping the memory modules to its partners for testing earlier this year. It is an interesting technology, and the DIMM form factor and direct CPU interface will really let the 3D XPoint memory shine and reach its full potential. It will primarily be useful for the enterprise, scientific, and financial industries, where there is a huge need for faster and lower latency storage that can accommodate massive (multiple terabyte+) data sets that continue to get larger and more complex. It is a technology that likely will not trickle down to consumers for a long time, but I will be ready when it does. In the meantime, I am eager to see what kinds of things it will enable the big data companies and researchers to do! Intel claims it will not only be useful for supporting massive in-memory databases and accelerating HPC workloads, but also for things like virtualization, private clouds, and software defined storage.

What are your thoughts on this new memory tier and the future of XPoint?


Source: Intel

Micron Launches SolidScale Platform Architecture, Consolidates NVMe in the Datacenter

Subject: Storage | May 24, 2017 - 08:45 PM |
Tagged: SolidScale, NVMf, NVMe, micron, fabric, Cassandra

A few weeks back, I was briefed on Micron’s new SolidScale Architecture. This is essentially Micron’s off-the-shelf solution that ties together a few different technologies in an attempt to consolidate large pools of NVMe storage into a central location that can then be efficiently segmented and distributed among peers and clients across the network.

Traditionally it has been difficult to effectively utilize large numbers of SSDs in a single server. The combined IOPS capabilities of multiple high-performance PCIe SSDs can quickly saturate the available CPU cores of the server due to kernel/OS IO overhead incurred with each request. As a result, a flash-based network server would be bottlenecked by the server CPU during high IOPS workloads. There is a solution to this, and it’s simpler than you might think: Bypass the CPU!
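
A quick sketch of the arithmetic behind that bottleneck: if each I/O that traverses the kernel block and interrupt path burns a fixed number of CPU cycles, the cores required scale linearly with IOPS. The cycle counts below are assumed ballpark figures, not Micron or SolidScale measurements:

```python
# Why a box full of NVMe SSDs can become CPU-bound: every I/O through the
# kernel path costs CPU cycles. Per-I/O costs are illustrative assumptions;
# kernel-bypass / RDMA-style paths cut them sharply.

io_per_sec_target = 10_000_000      # e.g. a shelf of high-end NVMe SSDs
cycles_per_io_kernel = 20_000       # assumed cycles per I/O through the OS stack
cycles_per_io_bypass = 2_000        # assumed cycles with a kernel-bypass data path
core_hz = 3_000_000_000             # 3 GHz core

def cores_needed(cycles_per_io: int) -> float:
    return io_per_sec_target * cycles_per_io / core_hz

print(f"Kernel path: ~{cores_needed(cycles_per_io_kernel):.0f} cores just for I/O")
print(f"Bypass path: ~{cores_needed(cycles_per_io_bypass):.0f} cores")
```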

blog-image-c.jpg

Read on for our deeper dive into Micron's SolidScale technology!

Source: Micron
Subject: Storage
Manufacturer: Sony
Tagged: ssd, ps4 pro, ps4, consoles

Intro and Upgrading the PS4 Pro Hard Drive

When Sony launched the PS4 Pro late last year, it introduced an unusual mid-cycle performance update to its latest console platform. But in addition to increased processing and graphics performance, Sony also addressed one of the original PS4's shortcomings: the storage bus.

The original, non-Pro PlayStation 4 utilized a SATA II bus, capping speeds at 3Gb/s. This was more than adequate for keeping up with the console's stock hard drive, but those who elected to take advantage of Sony's user-upgradeable storage policy and install an SSD faced the prospect of a storage bus bottleneck. As we saw in our original look at upgrading the PS4 with a solid state drive, the SSD brought some performance improvements in terms of load times, but these improvements weren't always as impressive as we might expect.
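
For context on the size of that bottleneck, here is the encoding math. The 8b/10b overhead is standard SATA arithmetic; the ~500 MB/s SSD figure used in the comment is a typical SATA SSD spec, not a PS4-specific measurement:

```python
# Why SATA II was the bottleneck for an SSD in the original PS4: both SATA
# generations use 8b/10b encoding, so usable throughput is 80% of line rate.

def sata_usable_mbps(line_rate_gbps: float) -> float:
    """Usable payload bandwidth in MB/s after 8b/10b encoding."""
    return line_rate_gbps * 1000 * (8 / 10) / 8

sata2 = sata_usable_mbps(3.0)   # ~300 MB/s ceiling on the original PS4
sata3 = sata_usable_mbps(6.0)   # ~600 MB/s ceiling on the PS4 Pro

print(f"SATA II ceiling:  ~{sata2:.0f} MB/s")
print(f"SATA III ceiling: ~{sata3:.0f} MB/s")
# A SATA SSD capable of ~500 MB/s sequential reads is capped by SATA II
# but fits within SATA III, which is the premise of this PS4 Pro test.
```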

ps4-pro-ssd-vs-hdd.jpg

We therefore set out to see what performance improvements, if any, could be gained by the inclusion of SATA III in the PS4 Pro, and if this new Pro model makes a stronger case for users to shell out even more cash for a high capacity solid state drive. We weren't the only ones interested in this test. Digital Foundry conducted their own tests of the PS4 Pro's SATA III interface. They found that while a solid state drive in the PS4 Pro clearly outperformed the stock hard drive in the original PS4, it generally didn't offer much improvement over the SATA II-bottlenecked SSD in the original PS4, or even, in some cases, the stock HDD in the PS4 Pro.

ocz-trion-100.jpg

But we noticed a major issue with Digital Foundry's testing process. For their SSD tests, they used the OCZ Trion 100, an older SSD with relatively mediocre performance compared to its latest competitors. The Trion 100 also has relatively low write endurance, so we don't know the condition and performance characteristics of Digital Foundry's drive.

samsung-850-evo-1tb.jpg

To address these issues, we conducted our tests with a brand new 1TB Samsung 850 EVO. While it is far from the cheapest, or even the most reasonable, option for a PS4 Pro upgrade, our aim is to assess the "best case scenario" when it comes to SSD performance over the PS4 Pro's SATA III bus.

Continue reading our analysis of PS4 Pro loading times with an SSD upgrade!

Corsair Forces its way into the NVMe market with their MP500 M.2 SSD

Subject: Storage | May 18, 2017 - 04:26 PM |
Tagged: corsair, corsair force mp500, mp500, M.2, NVMe, PS5007-E7, toshiba mlc

Corsair have entered the NVMe market with a new Force Series product, the MP500, which contains Toshiba's 15-nm MLC run by the popular Phison PS5007-E7 controller.  There is a difference that The Tech Report noticed right away: that sticker is for more than just show, as it hides a layer of heat-dissipating copper inside, just like we have seen on Samsung products.  It may have been the sticker, or some sort of secret sauce Corsair added, but the MP500's performance pulled ahead of Patriot's Hellfire SSD overall.  Read the full review to see where the drive showed the most performance differential.

main.jpg

"Corsair is throwing its hat into the NVMe SSD ring with the Force Series MP500 drive. We subjected this gumstick to our testing gauntlet to see how well the 240GB version fares against the rest of the formidable NVMe field."

Here are some more Storage reviews from around the web:
