Deathwish RAID racing; hit single channel DDR4 transfer rates with WD Black NVMe drives

Subject: Storage | June 19, 2018 - 04:13 PM |
Tagged: wd black nvme, RAID-0, raid, kingston, Hyper M.2 X16 Card, deathwish, ddr4-2400, asus

This will cost you a bit to set up but will provide you with almost unbelievable transfer rates.  Simply combine eight 1 TB WD Black NVMe SSDs at roughly $400 a pop with a pair of ASUS' Hyper M.2 expansion cards at $60 each and build up a deathwish RAID of doom!  TechARP just posted a look at how Andrew Vo managed to pull this off. 

Update: As pointed out by several readers who ... well, actually watched the video instead of just reading the article ... this was done on Threadripper, which makes far more sense than a PCIe lane-starved Intel system. Ignore me and make your Threadripper roar.

Unfortunately, this trick will not work the same way on AMD platforms; it is limited to Intel Skylake or Coffee Lake systems with VROC support. It will be interesting to see how a properly configured Threadripper system would compare.

WD-Black-NVMe-SSD-benchmark-results-02.jpg

"To hit 19 GB/s, you need to create a RAID 0 array of those eight 1 TB WD Black NVMe SSDs, but you can’t use the motherboard’s RAID feature because you would be limited by the 32 Gbps/4GB/s DMI bottleneck."


Source: TechARP
Subject: Storage
Manufacturer: Toshiba

Toshiba RC100 240GB/480GB SSD Review

Introduction:

Budget SSDs are a tough trick to pull off. You have components, a PCB, and ultimately assembly - all things which cost money. Savings can be had when major components (flash) are sourced from within the same company, but there are several companies already playing that game. Another way to go is to reduce PCB size, but then you can only fit so much media on the same board as the controller and other necessary parts. Samsung attempted something like this with its PM971, but that part was never retail, meaning the cost savings were only passed to the OEMs implementing that part into their systems. It would be nice if a manufacturer would put a part like this into the hands of regular customers looking to upgrade their system on a budget, and Toshiba is aiming to do just that with their new RC100 line:

DSC04992.JPG

Not only did Toshiba stack the flash and controller within the same package, they also put that package on an M.2 2242 PCB. No need for additional length here really, and they could have possibly gotten away with M.2 2230, but that might have required some components on the back side of the PCB. A single-sided PCB that is 12mm longer is still cheaper to produce than a double-sided one, so the design decision makes sense here.

Specifications:

specs.png

Bear in mind these are budget parts and small ones at that. The specs are decent, but these are not meant to be fire-breathing SSDs. The PCIe 3.0 x2 interface will be limiting things a bit, and these are geared more towards power efficiency with a typical active power draw of only 3.2 Watts. While we were not sampled the 120GB part, it does appear to maintain decent specified performance despite the lower capacity, which is a testament to the performance of Toshiba's 64-layer 3D BiCS TLC flash.

Packaging:

DSC04989.JPG

Not much to talk about here. Simple, no frills, SSD packaging. Just enough to ensure the product arrives undamaged. Mission accomplished.

Read on for our full review of the Toshiba RC100 240GB and 480GB SSDs!

T'was a dark and StorMI knight

Subject: Storage | June 11, 2018 - 06:10 PM |
Tagged: amd, StorMI, tiered storage

AMD's Store Machine Intelligence Technology (StoreMI) seeks to create a hybrid better than the sum of its parts, combining the low cost of cold spinning rust with the speed of hot flash-based drives.  The implementation is not the same as Intel's SRT, which treats your SSD as a cache and copies frequently used files onto it; StoreMI instead works like a tiered storage system.  That means entire files move between hot and cold storage as their usage patterns change, rather than a cache constantly being rebuilt.
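To make the caching-versus-tiering distinction concrete, here is a toy sketch in Python. It is not AMD's algorithm; the capacities and the three-read promotion threshold are made up, and it only illustrates that a tier moves data rather than copying it:

```python
# Toy tiering model: files are *moved* between tiers, never duplicated (unlike a cache).
class TieredStore:
    def __init__(self, fast_capacity_gb):
        self.fast = {}                   # name -> size (GB), lives on the SSD tier
        self.slow = {}                   # name -> size (GB), lives on the HDD tier
        self.fast_capacity = fast_capacity_gb
        self.reads = {}                  # crude access counter

    def add(self, name, size_gb):
        self.slow[name] = size_gb        # new data lands on the cold tier
        self.reads[name] = 0

    def read(self, name):
        self.reads[name] += 1
        if name in self.slow and self.reads[name] >= 3:
            self._promote(name)
        return "fast" if name in self.fast else "slow"

    def _promote(self, name):
        size = self.slow[name]
        # demote the coldest resident file if the fast tier would overflow
        while self.fast and sum(self.fast.values()) + size > self.fast_capacity:
            coldest = min(self.fast, key=lambda n: self.reads[n])
            self.slow[coldest] = self.fast.pop(coldest)
        self.fast[name] = self.slow.pop(name)    # moved, not copied

store = TieredStore(fast_capacity_gb=120)
store.add("game.bin", 60)
for _ in range(3):
    tier = store.read("game.bin")
print(tier)    # "fast" -- after repeated reads the file lives only on the SSD tier
```

An SRT-style cache, by contrast, would keep a copy of the file on the hard drive and simply evict the SSD copy when the cache needs room.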

From the testing which [H]ard|OCP did, the machine intelligence part of StoreMI lives up to its name, and the installation and configuration are very well done, to the point where they declare Intel's Rapid Storage Technology outclassed and not even worth considering as competition to AMD's storage stacking skills.

15273718665j23yohyqm_1_2_l.gif

"AMD’s StoreMI or (Store Machine Intelligence Technology) is storage performance enhancement technology, which can accelerate the responsiveness and the perceived speed of mechanical storage devices to SSD levels. This isn’t exactly a new concept, but AMD’s approach to this implementation is different than what we’ve seen in the past."


Source: [H]ard|OCP
Subject: Storage
Manufacturer: Intel

A little Optane for your HDD

Intel's Optane Memory caching solution, launched in April of 2017, was a straightforward feature. On supported hardware platforms, consisting of 7th and 8th generation Core processor-based computers, users could add a 16GB or 32GB Optane M.2 module to their PC and enable acceleration for their slower boot device (generally a hard drive). Beyond that, there weren't any additional options; you could only enable and disable the caching solution.

However, users who were looking for more flexibility were out of luck. If you already had a fast boot device, such as an NVMe SSD, you had no use for these Optane Memory modules, even if you had a slower hard drive in your system for mass storage that you wanted to speed up.

DSC04972.JPG

At GDC this year, alongside the announcement of 64GB Optane Memory modules, Intel announced that it is bringing support for secondary drive acceleration to the Optane Memory application.

Now that we've gotten our hands on this new 64GB module and the appropriate software, it's time to put it through its paces and see if it was worth the wait.

Performance

The full test setup is as follows:

Test System Setup

CPU: Intel Core i7-8700K
Motherboard: Gigabyte H370 Aorus Gaming 3
Memory: 16GB Crucial DDR4-2666 (running at DDR4-2666)
Storage: Intel SSD Optane 800P; Intel Optane Memory 64GB and 1TB Western Digital Black
Sound Card: On-board
Graphics Card: NVIDIA GeForce GTX 1080Ti 11GB
Graphics Drivers: NVIDIA 397.93
Power Supply: Corsair RM1000x
Operating System: Windows 10 Pro x64 RS4

optanecache-5.png

In coming up with test scenarios to properly evaluate drive caching on a secondary mass storage device, we had a few criteria. First, we were looking for scenarios that require lots of storage, meaning that they wouldn't fit on a smaller SSD. In addition to requiring a lot of storage, the applications must also rely on fast storage.

Click here to continue reading our look at accelerating secondary drives with Optane

Subject: Storage
Manufacturer: ASUS

Is it a usable feature?

EDIT: We've received some clarification from Intel on this feature:

"The feature is actually apart of RST. While this is a CPU-attached storage feature, it is not VROC. VROC is a CPU-attached PCIe Storage component of the enterprise version of the product, Intel RSTe. VROC requires the new HW feature Intel Volume Management Device (Intel VMD) which is not available on the Z370 Chipset.

The Intel Rapid Storage Technology for CPU-attached Intel PCIe Storage feature is supported with select Intel chipsets and requires system manufacturer integration. Please contact the system manufacturer for a list of their supported platforms."

While this doesn't change how the feature works, or our testing, we wanted to clarify this point and have removed all references to VROC on Z370 in this review.

While updating our CPU testbeds for some upcoming testing, we came across an odd listing on the UEFI updates page for our ASUS ROG STRIX Z370-E motherboard.

asus-uefi.png

From the notes, it appeared that the release from late April of this year enables VROC for the Z370 platform. Taking a look at the rest of ASUS' Z370 lineup, it appears that all of its models received a similar UEFI update mentioning VROC. EDIT: As it turns out, while these patch notes call this feature "VROC", it is officially known as "Intel Rapid Storage Technology for CPU-attached Intel PCIe Storage" and is slightly different from VROC on other Intel platforms.

While we are familiar with VROC as a CPU-attached RAID technology for NVMe devices on the Intel X299 and Xeon Scalable platforms, it has never been mentioned as an available option for the enthusiast grade Z-series chipsets. Could this be a preview of a feature that Intel has planned to come for the upcoming Z390 chipset?

Potential advantages of a CPU-attached RAID mode on the Z370 platform mostly revolve around throughput. While chipset RAID on Z370 will support three drives, the total throughput is limited to just under 4GB/s by the DMI 3.0 link between the processor and chipset.
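As a rough sanity check on those figures (this assumes only 128b/130b encoding overhead; real protocol overhead takes a bit more off the top):

```python
# DMI 3.0 behaves like a PCIe 3.0 x4 link; CPU-attached lanes scale with lane count.
GT_PER_S = 8.0                 # PCIe 3.0 / DMI 3.0 signalling rate per lane
ENCODING = 128 / 130           # 128b/130b line encoding

def gb_per_s(lanes):
    # GT/s per lane -> GB/s per direction for the whole link
    return lanes * GT_PER_S * ENCODING / 8

print(f"DMI 3.0 (x4 equivalent):           {gb_per_s(4):.2f} GB/s")    # ~3.94 GB/s
print(f"x16 slot bifurcated into four x4s: {gb_per_s(16):.2f} GB/s")   # ~15.75 GB/s
```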

Like we've seen AMD do on their X470 platform, CPU-attached RAID should scale as long as you have CPU-connected PCI-Express lanes available that are not being used by another device like a GPU or network card.

First, some limitations.

Primarily, it's difficult to connect multiple NVMe devices to the CPU rather than the chipset on most Z370 motherboards. Since the platform natively supports NVMe RAID through the Z370 chipset, all of the M.2 slots on our Strix Z370-E are wired to go through the chipset connection rather than directly to the CPU's PCIe lanes.

IMG_8161.jpg

To combat this, we turned to the ASUS Hyper M.2 X16 card, which utilizes PCIe bifurcation to enable the use of four M.2 devices via one PCI-E x16 slot. Luckily, ASUS has built support for bifurcation and this Hyper M.2 card into the UEFI for the Strix Z370-E.

180523163122.png

Aiming to simplify the setup, we are using the integrated UHD 630 graphics of the i7-8700K, and running the Hyper M.2 card in the primary PCIe slot, usually occupied by a discrete GPU.

Continue reading our look at CPU-attached NVMe RAID on Z370 motherboards from ASUS!

Marvell Updates NVMe SSD Controllers - 8-Channel 88SS1100 and 4-Channel 88SS1084

Subject: Storage | June 7, 2018 - 06:08 AM |
Tagged: toggle NAND, ssd, PCIe 3.0 x4, ONFI, NVMe, Marvell, controller, 88SS1100, 88SS1084

We've seen faster and faster SSDs over the past decade, and while the current common interface is PCIe 3.0 x4, SSD controllers still have a hard time saturating the available bandwidth. This is due to other factors like power consumption constraints of the M.2 form factor as well as the controllers not being sufficiently optimized to handle IO requests at a consistently low latency. This means there is plenty of room for improvement, and with that, we have two new NVMe SSD controllers out of Marvell:

block diagram.png

Above is the block diagram for the 88SS1100, an 8-Channel controller that promises higher performance over Marvell's previous parts. There is also a nearly identical 88SS1084, which drops to four physical channels but retains the same eight CE (chip enable) lines, meaning it can still talk to eight separate banks of flash, which should keep performance reasonable despite the halving of the physical channels available. Reducing channels to the flash helps save power and reduces the cost of the controller.
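A toy model helps show why keeping all eight chip-enable lines matters even with half the channels; the bus speed and program time below are illustrative assumptions, not Marvell specifications:

```python
# Toy write-throughput model: a NAND die is slow to program a page, so putting two
# dies (chip enables) behind one channel lets the bus feed one die while the other
# is busy programming. Numbers are made up for illustration only.
PAGE_BYTES = 16 * 1024
CHANNEL_MBps = 800             # assumed per-channel bus speed
DIE_PROGRAM_S = 60e-6          # assumed time for a die to program one page

per_die_MBps = PAGE_BYTES / DIE_PROGRAM_S / 1e6     # ~273 MB/s sustained per die

def channel_write_MBps(dies_per_channel):
    # a channel can't exceed its bus speed, nor the combined rate of its dies
    return min(CHANNEL_MBps, dies_per_channel * per_die_MBps)

for channels, ce_per_channel in [(8, 1), (4, 2)]:
    total = channels * channel_write_MBps(ce_per_channel)
    print(f"{channels} channels x {ce_per_channel} CE each: ~{total:.0f} MB/s sustained writes")
```

With these assumed numbers, both layouts land at the same sustained write figure, which is the point: the dies, not the channel count, are the limiter until you hit the per-channel bus ceiling.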

Marvell claims the new controller can reach 3.6GB/s throughput and 700,000 IOPS. Granted, it would need to be mated to solid-performing flash in order to reach those levels, but that shouldn't be an issue, as the new controllers increase compatibility with modern flash communication protocols (ONFi 4.0, Toggle 3.0, etc). Marvell's NANDEdge tech (their name for their NAND-side interface) enters its fourth generation, promising compatibility with 96-layer and TLC / QLC flash.

specs.png

Specs for the 8-channel 88SS1100. The 88SS1084 is identical except that the BGA package drops in size to 12mm x 13.5mm and only requires 418 balls.

Rounding out the specs are the staples expected in modern SSD controllers, like OTP / Secure Drive / AES hardware crypto support, and NVMe 1.3 compliance for the host end of the interface.

While the two new parts are 'available for purchase now', it will take a few months before we see them appear in purchasable products. We'll be keeping an eye out for appearances in future SSD launches!

Marvell's full press release appears after the break.

Source: Marvell

Computex 2018: Intel Announces 380GB Optane 905P in M.2 22110 Form Factor

Subject: Storage | June 6, 2018 - 03:55 AM |
Tagged: ssd, Optane Memory, Optane, M.2 22110, M.2, Intel, 905P, 3D XPoint

At Computex 2018, Intel announced a new Optane 905P SSD:

905P Rear.PNG

...the Optane 905P 380GB, now in an M.2 form factor!

905P Front.jpg

This looks to be a miniaturization of the 7-channel controller previously only available on the desktop add-in cards (note there are 7 packages). There is a catch though, as fitting 7 packages plus a relatively large controller means this is not M.2 2280, but M.2 22110. The M.2 22110 (110mm long) form factor may limit where you can install this product, as mobile platforms and some desktop motherboards only support up to an M.2 2280 (80mm) length. Power consumption may also be a concern for mobile applications, as this looks to be the full blown 7-channel controller present on the desktop AIC variants of the 905P and 900P.

We have no performance numbers just yet, but based on the above we should see figures in-line with the desktop Optane parts (and higher than the previous 'Optane Memory'/800P M.2 parts, which used a controller with fewer channels). Things may be slightly slower since this part would be limited to a ~7W power envelope - that is the maximum you can get out of an M.2 port without damaging the motherboard or overheating the smaller surface area of an M.2 form factor.

An interesting point to bring up is that while 3D XPoint does not need to be overprovisioned like NAND flash does, there is a need to have some spare area as well as space for the translation layer (used for wear leveling - still a requirement for 3D XPoint as it must be managed to some degree). In the past, we've noted that smaller capacities of a given line will see slightly less of a proportion of available space when comparing the raw media present to the available capacity. Let's see how this (theoretically) works out for the new 905P:

I'm making an educated guess that the new 380GB part contains 4-die stacks within its packages. We've never seen 8-die stacks come out of Intel, and there is little reason to believe any would be used in this product based on the available capacity. Note that higher capacities run at ~17% excess media, but as the capacity shrinks, the percentage of excess increases. The 280GB 900P comes in at 20%, while the new 905P M.2 sits at 18%. Not much of a loss there, meaning the cost/GB *should* come in-line with the pricing of the 480GB 900P, which should put the 905P 380GB right at a $450-$500 price point.
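Running that guess through the math (7 packages are visible in the photos; 4-die stacks and 16GB per die are the assumptions above):

```python
# Educated-guess overprovisioning math for the 380GB 905P M.2.
packages, dies_per_package, gb_per_die = 7, 4, 16    # 4-die stacks and 16GB/die assumed
raw_gb = packages * dies_per_package * gb_per_die    # 448 GB of raw XPoint
user_gb = 380
print(f"{raw_gb} GB raw vs {user_gb} GB usable -> {raw_gb / user_gb - 1:.0%} excess media")  # ~18%
```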

The new 905P M.2 22110 is due out later this year.

Source: Intel

Intel Launches Optane DC Persistent Memory (DIMMs), Talks 20TB QLC SSDs

Subject: Storage | May 30, 2018 - 07:28 PM |
Tagged: ssd, QLC, Optane DC, Optane, Intel, DIMM, 3D XPoint, 20TB

Lots of good stuff coming out of Intel's press event earlier today. First up is Optane, now (finally and officially) in a DIMM form factor!:

Intel-Optane-Persistent-memory-1-.jpg

We have seen and tested Optane in several forms, but all so far have been bottlenecked by the interface and controller architectures. The only real way to fully realize the performance gains of 3D XPoint (how it works here) is to move away from the slower interfaces that are holding it back. A DIMM form factor is just the next logical step here.

filling-the-gaps-between-memory-and-storage-after.png

Intel shows the new 'Optane DC Persistent Memory' as yet another tier up the storage/memory stack. The new parts will be available in 128GB, 256GB, and 512GB capacities. We don't have confirmation on the raw capacity, but based on Intel's typical max stack height of 4 dies per package, 3D XPoint's raw die capacity of 16GB, and a suspected 10 packages per DIMM, that should come to 640GB raw capacity. Combined with a 60 DWPD rating (up from 30DWPD for P4800X), this shows Intel is loosening up their design margins considerably. This makes sense as 3D XPoint was a radically new and unproven media when first launched, and it has now built up a decent track record in the field.
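For reference, the arithmetic behind that 640GB figure (10 packages per DIMM is the suspicion above; 4 dies per package and 16GB per die follow Intel's usual practice):

```python
# Suspected raw capacity behind the 512GB Optane DC Persistent Memory DIMM.
packages, dies_per_package, gb_per_die = 10, 4, 16   # assumptions noted above
raw_gb = packages * dies_per_package * gb_per_die    # 640 GB raw
user_gb = 512
print(f"{raw_gb} GB raw for a {user_gb} GB DIMM -> {raw_gb / user_gb - 1:.0%} spare")  # 25%
```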

gap-3.png

Bridging The Gap chart - part of a sequence from our first P4800X review.

Recall that even with Intel's Optane DC SSD parts like the P4800X, there remained a ~100x latency gap between the DRAM and the storage. The move to DIMMs should help Intel push closer to the '1000x faster than NAND' claims made way back when 3D XPoint was launched. Even if DIMMs were able to extract all possible physical latency gains from XPoint, there will still be limitations imposed by today's software architectures, which still hold many legacy throwbacks from the times of HDDs. Intel generally tries to help this along by providing various caching solutions that allow Optane to directly augment the OS's memory. These new DIMMs, when coupled with supporting enterprise platforms capable of logically segmenting RAM and NV DIMM slots, should be able to be accessed either directly or as a memory expansion tier.

Circling back to raw performance, we'll have to let software evolve a bit further to see even better gains out of XPoint platforms. That's likely the reason Intel did not discuss any latency figures for the new products today. My guess is that latencies should push down into the 1-3us range, splitting the difference between current generation DRAM (~80-100ns) and PCIe-based Optane parts (~10us). While the DIMM form factor is certainly faster, there is still a management layer at play here, meaning some form of controller or a software layer to handle wear leveling. No raw XPoint sitting on the memory bus just yet.

Also out of the event came talks about QLC NAND flash. Recently announced by Intel / Micron, along with 96-layer 3D NAND development, QLC helps squeeze higher capacities out of given NAND flash dies. Endurance does take a hit, but so long as the higher density media is coupled to appropriate client/enterprise workloads, there should be no issue with premature media wear-out or data retention. Micron has already launched an enterprise QLC part, and while Intel has been hush-hush on actual product launches, they did talk about both client and enterprise QLC parts (with the latter pushing into 20TB in a 2.5" form factor).

Press blast for Optane DC Persistent Memory appears after the break (a nicer layout is available by clicking the source link).

Subject: Storage
Manufacturer: ADATA

Introduction, Specifications and Packaging

Introduction:

ADATA has a habit of occasionally coming out of the woodwork and dropping a great performing SSD on the market at a highly competitive price. A few of their recent SATA SSD launches were promising, but some were very difficult to find in online stores. This has improved more recently, and current ADATA products now enjoy relatively wide availability. We were way overdue for an ADATA review, and the XPG SX8200 is a great way for us to get back into covering this company's offerings:

180523-114049_2.jpg

For those unaware, XPG is a computing-related sub-brand of ADATA, and if you have a hard time finding details for these drives online, it is because you must look at their dedicated xpg.com domain. Parent brand ADATA has since branched into LED lighting and other industrial applications, such as solid-state drive motor controllers and the like. Some PC products bear the ADATA name, such as USB drives and external hard drives.

Ok, enough rambling about other stuff. Let's take a look at this XPG SX8200!

Specifications:

specs.png

Specs are mostly par for the course here, with a few notable exceptions. The SX8200 opts for a lower available capacity than you would typically see with a TLC SSD. That means a slight bump in OP, which helps nudge endurance higher thanks to that sacrifice. Another interesting point is that they have simply based their specs of 'up to 3200 MB/s read / 1700 MB/s write' on direct measurements from common benchmarking software. While the tests they used are 'short-run' benchmarks that will remain within the SLC cache of these SSDs, I do applaud ADATA for their openness here.
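As a rough illustration of that OP bump, assume (purely hypothetically) that 256 GiB of raw TLC sits behind both a typical 256GB drive and the SX8200's 240GB model:

```python
# Hypothetical spare-area comparison; the raw flash figure is an assumption.
raw_gb = 256 * 2**30 / 1e9          # 256 GiB of raw flash ~= 274.9 GB
for usable_gb in (256, 240):        # typical TLC capacity vs. the SX8200's choice
    spare = raw_gb / usable_gb - 1
    print(f"{usable_gb} GB usable -> {spare:.1%} spare area")   # ~7.4% vs ~14.5%
```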

Packaging:

180523-114049.jpg

Straightforward packaging with a small bonus inside - in the form of a thermal adhesive-backed aluminum heat spreader. This is included as an option since some folks may have motherboards with integrated heat spreading M.2 socket covers or laptops with extremely tight clearances, and the added thickness may not play nicely in those situations.

Read on for our full review of the ADATA XPG SX8200 M.2 NVMe SSD!

Some Dell Systems Shipping with 24GB of memory: 8GB DDR4 and 16GB Optane Memory

Subject: Memory, Storage | May 29, 2018 - 04:10 PM |
Tagged: Optane, Intel, g3, dell

Recently I came across an interesting product listing on Dell’s website for its new G3 15” gaming notebook. These are budget-friendly gaming systems with mainstream discrete GeForce graphics cards in them like the GTX 1050 and GTX 1050 Ti. Starting at just $699 they offer a compelling balance of performance and value, though we haven’t yet gotten hands on one for testing.

One tidbit that seemed off to me was this:

dell1.png

Several of these systems list 24GB of memory through a combination of 8GB of DDR4 and 16GB of Optane Memory for caching. A similar wording exists in the configuration page for these machines:

dell2.png

Clicking on the More Info link takes you to the “Help Me Choose” portion of the page that details what system memory does, how it helps the performance of your machine, and how Optane comes into the mix. There is important wording to point out that Dell provides (emphasis mine):

Some systems allow you to add Intel® Optane™ memory, which is a system acceleration solution for the 7th Gen and 8th Gen Intel® Core™ processor platforms. This solution comes in a module format and by placing this new memory media between the processor and a slower SATA-based storage devices ( HDD, SSHD or SATA SSD), you are able to store commonly used data and programs closer to the processor, allowing the systems to access this information more quickly and improve overall system performance.

Mixing DRAM with Intel® Optane™ delivers better performance and cost. For example, 4 GB DRAM + 16GB Intel® Optane™ memory delivers better performance and cost than just 8GB DRAM.

What is the difference between Intel® Optane™ memory and DRAM? Does it replace DRAM?
The Intel® Optane™ memory module does not replace DRAM. It can be, however, added to DRAM to increase systems performance.

If I use Intel® Optane™ memory with an HDD to accelerate my games, game launches and level loads become faster and close to that of an SSD experience, but what about the game play? Is the game play impacted?
Game play will not be that different between an SSD and an HDD based system since the game is loaded into DRAM during play.

While my initial reaction, that this is a clever way to trick consumers into thinking they are getting 24GB of memory in their PC when in reality it is only 8GB, holds true, there are a lot of interesting angles to take.

First, yes, I believe it is a poor decision to incorporate Optane Memory into the specification of “memory” in these PCs. Optane Memory is an accelerant for system storage, and cannot replace DRAM (as the FAQ on Dell’s website states). If you have 8GB of memory, and your application workload fills that, having 16GB of memory would be a tremendous improvement in performance. Having 16GB of Optane caching on your system will only aid in moving and swapping data from main storage INTO that 8GB pool of physical memory.

Where Dell's statements hold true though is in instances where memory capacity is not the bottleneck of performance, and your system has a standard spinning hard drive rather than an SSD installed. Optane Memory and its caching capabilities will indeed improve performance more than doubling the main system memory would, in instances where memory is not the limiter.

I do hope that Dell isn't choosing to remove SSD options or defaults from these notebooks in order to maintain that performance claim; but based on my quick check, any notebook configuration that makes the "24GB of memory" claim does NOT offer an SSD upgrade path.

Though it isn't called out one way or the other in the Dell specifications, my expectation is that they are NOT configuring these systems to use the Optane Memory as a part of the Windows page file, which MIGHT show some interesting benefits in regards to lower system memory capacity. Instead, these are likely configured with Optane Memory as a cache for the 1TB hard drive that is also a required piece of the configuration. If I'm incorrect, this config will definitely warrant some more testing and research.

Where the argument might shift is in the idea of performance per dollar improvements to overall system responsiveness. As the cost of DDR4 memory has risen, 16GB of Optane Memory (at around $25) is well below the cost of a single 8GB SO-DIMM for these notebooks (in the $80-90 range), giving OEMs a significant pricing advantage towards their bottom line. And yes, we have proven that Optane Memory works well and accelerates application load times and even level loads in some games.

But will it allow you to run more applications or games that might need or want more than 8GB of system memory? No.

Ideally, these configurations would include both 16GB of DDR4 system memory AND the 16GB of Optane Memory to get the best possible performance. But as system vendors and Intel itself look for ways to differentiate a product stack, while keeping prices lower and margins higher, this is one of the more aggressive tactics we have seen.

I’m curious what Dell’s input on this will be, if this is a direction they plan on continuing or one that they are simply trialing. Will other OEMs follow suit? Hopefully I’ll be able to get some interesting answers this week and during Computex early next month.

For now, it is something that potential buyers of these systems should pay attention to and make sure they are properly informed as to the hardware configuration capabilities and limits.

Source: Dell