Bring your own cache to Toshiba's RC100 Host Memory Buffet

Subject: Storage | July 13, 2018 - 03:57 PM |
Tagged: toshiba, RC100, NVMe, M.2, M.2 2242

The wee M.2 2242 form factor of the RC100 means there is no space for a DRAM buffer, which led Toshiba to utilize the Host Memory Buffer feature included in NVMe revision 1.2.  To use this feature you must be running Windows 10 Fall Creators Update (version 1709) or at least the 4.14 Linux kernel.  It commandeers a portion of your system RAM to act as the cache, which is somewhat less effective than having it on board, as The Tech Report's testing shows.  The drive is also hampered by its PCIe x2 interface, which ensures it falls behind x4 NVMe drives.
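If you are on Linux and want to verify that HMB is actually in use, the NVMe driver logs the allocation when the drive is probed. Below is a minimal sketch, assuming the kernel's usual "allocated N MiB host memory buffer" message format (the exact wording can vary by kernel version, so treat this as illustrative):

```python
import re
import subprocess

# Scan the kernel log for the NVMe driver's HMB allocation message.
# Assumption: the 4.14+ nvme driver logs a line along the lines of
#   "nvme nvme0: allocated 32 MiB host memory buffer."
# when Host Memory Buffer is negotiated with a DRAM-less drive.
HMB_PATTERN = re.compile(r"(nvme\d+).*allocated (\d+) MiB host memory buffer")

def find_hmb_allocations():
    """Return (controller, size_mib) pairs found in dmesg (may need root)."""
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    return [(m.group(1), int(m.group(2))) for m in HMB_PATTERN.finditer(log)]

if __name__ == "__main__":
    hits = find_hmb_allocations()
    if hits:
        for ctrl, mib in hits:
            print(f"{ctrl}: {mib} MiB of system RAM donated as HMB")
    else:
        print("No HMB allocations logged (drive may have its own DRAM).")
```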

The testing reveals the weaknesses of this design, but it is an interesting implementation of an NVMe feature not often seen, which is in itself worth taking a look at.


"Toshiba's RC100 NVMe SSD takes a bold stab at life without DRAM or a full four lanes of PCIe connectivity. Unlike many DRAM-less SSDs, however, the RC100 has a trick up its sleeve with the NVMe protocol's Host Memory Buffer caching feature. Join us to find out whether NVMe and HMB can bolster this entry-level SSD's performance."

Source: The Tech Report

That didn't take long, RGB SSDs from Team Group

Subject: Storage | July 5, 2018 - 02:18 PM |
Tagged: ssd, sata, RGB, team group, delta rgb

Team Group has hit peak RGB with its new Delta SSD, which not only has a full-blown case of RGBs but is also compatible with ASUS Aura Sync, MSI Mystic Light, Gigabyte RGB Fusion, and other fancy software to control your blinken lighten.  In theory it should also offer performance that saturates SATA 6Gbps bandwidth, but who cares about that when you can get even more lumens shoved into your PC!  For about $80 you can pick one up, though with this drive you should be going with at least a RAID 5 setup.

Join TechPowerUp and bask in the glow.


"Team Group's Delta RGB SSD is a unique solid-state drive, due to its amazing RGB support. It connects to your motherboard's RGB header, which then gives you full control over the LEDs, for mixed colors, patterns and custom lighting effects. Performance is good too, so is pricing, with just $80 for the 250 GB version."


Source: TechPowerUp

Biostar Announces Budget M500 NVMe SSDs

Subject: Storage | June 29, 2018 - 06:14 PM |
Tagged: tlc, ssd, NVMe, biostar, 3d nand

Motherboard manufacturer Biostar is expanding its solid state drive lineup with the launch of the M500 M.2 2280 SSD, which appears to be the company’s first PCIe NVMe SSD (it is not the company's first M.2 drive, but those earlier drives used SATA). The new Biostar M500 uses 3D TLC NAND flash and supports the NVMe 1.2 protocol over a PCIe x2 interface. The exact controller and flash chips used have not yet been revealed, however.


Biostar continues its gamer / racing aesthetics with the new drive, featuring a black heatsink with two LEDs that serve a utilitarian purpose. One LED shows the temperature of the drive at a glance (red/yellow/green) while the other shows data transfer activity and indicates which PCIe mode (2.0 / 3.0) the drive is running in.

The M500 SSD uses up to 1.7W while reading. It comes in four SKUs: 128 GB, 256 GB, 512 GB, and 1 TB capacities with 256 MB, 512 MB, or 1 GB of DDR3L cache depending on capacity.

As far as performance is concerned, Biostar claims up to 1,700 MB/s sequential reads and 1,100 MB/s sequential writes. Further, the drives offer up to 200K random read IOPS and 180K random write IOPS. Of course, these numbers are for the top-end 512 GB and 1 TB drives; the lower capacity models will offer less performance as they have less cache and fewer flash channels across which to spread reads and writes.

SSD Capacity   Max Sequential Read   Max Sequential Write   Read IOPS   Write IOPS   Price
128 GB         1,500 MB/s            550 MB/s               200K        180K         $59
256 GB         1,600 MB/s            900 MB/s               200K        180K         $99
512 GB         1,700 MB/s            1,100 MB/s             200K        180K         $149
1 TB           1,700 MB/s            1,100 MB/s             200K        180K         $269

According to Guru3D, Biostar’s M500 M.2 drives will be available soon with MSRPs of $59 for the 128 GB model, $99 for the 256 GB model, $149 for the 512 GB drive, and $269 for the 1 TB SKU. The pricing does not seem terrible, though the x2 interface does limit the drives' potential. These are squarely budget SSDs aimed at competing with SATA SSDs and enticing upgrades from mechanical drives. They may be most useful for upgrading older laptops, where an x2 drive would not waste a full x4 M.2 slot the way it would on a desktop machine.
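As a rough illustration of that interface limit (assuming PCIe 3.0's 8 GT/s per lane and 128b/130b encoding, and ignoring packet/protocol overhead), the M500's rated sequential read is already most of the way to the x2 ceiling:

```python
# Rough usable bandwidth of a PCIe 3.0 link: 8 GT/s per lane with
# 128b/130b line encoding, ignoring packet/protocol overhead.
GT_PER_LANE = 8e9            # transfers per second per lane
ENCODING = 128 / 130         # 128b/130b encoding efficiency

def pcie3_bandwidth_mbs(lanes: int) -> float:
    """Approximate one-direction PCIe 3.0 bandwidth in MB/s."""
    bits_per_sec = lanes * GT_PER_LANE * ENCODING
    return bits_per_sec / 8 / 1e6

x2_ceiling = pcie3_bandwidth_mbs(2)     # ~1,970 MB/s
print(f"PCIe 3.0 x2 ceiling: {x2_ceiling:,.0f} MB/s")
print(f"M500 rated read:     1,700 MB/s ({1700 / x2_ceiling:.0%} of ceiling)")
```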

What do you think about Biostar’s foray into NVMe solid state drives?

Source: Guru3D

Deathwish RAID racing; hit single channel DDR4 transfer rates with WD Black NVMe drives

Subject: Storage | June 19, 2018 - 04:13 PM |
Tagged: wd black nvme, RAID-0, raid, kingston, Hyper M.2 X16 Card, deathwish, ddr4-2400, asus

This will cost you a bit to set up but will provide you with almost unbelievable transfer rates.  Simply combine eight 1 TB WD Black NVMe SSDs at roughly $400 a pop with a pair of ASUS' Hyper M.2 expansion cards at $60 each and build up a deathwish RAID of doom!  TechARP just posted a look at how Andrew Vo managed to pull this off. 

Unfortunately this trick will not work the same on AMD platforms; it is limited to Intel Skylake or Coffee Lake with VROC support.  It will be interesting to see how a properly configured Threadripper system would compare.

EDIT: As pointed out by several readers who ... well, actually watched the video instead of just reading the article ... this was done on Threadripper, which makes far more sense than a PCIe lane-starved Intel system.  Ignore me and make your Threadripper roar.


"To hit 19 GB/s, you need to create a RAID 0 array of those eight 1 TB WD Black NVMe SSDs, but you can’t use the motherboard’s RAID feature because you would be limited by the 32 Gbps/4GB/s DMI bottleneck."


Source: TechARP
Subject: Storage
Manufacturer: Toshiba

Toshiba RC100 240GB/480GB SSD Review

Introduction:

Budget SSDs are a tough trick to pull off. You have components, a PCB, and ultimately assembly - all things which cost money. Savings can be had when major components (flash) are sourced from within the same company, but there are several companies already playing that game. Another way to go is to reduce PCB size, but then you can only fit so much media on the same board as the controller and other necessary parts. Samsung attempted something like this with its PM971, but that part was never sold at retail, meaning the cost savings were only passed to the OEMs building it into their systems. It would be nice if a manufacturer would put a part like this into the hands of regular customers looking to upgrade their system on a budget, and Toshiba is aiming to do just that with its new RC100 line:


Not only did Toshiba stack the flash and controller within the same package, they also put that package on an M.2 2242 PCB. There is really no need for additional length here, and they could possibly have gotten away with M.2 2230, but that might have required placing some components on the back side of the PCB. A single-sided PCB is cheaper to produce than a double-sided one that is 12mm shorter, so the design decision makes sense here.

Specifications:


Bear in mind these are budget parts, and small ones at that. The specs are decent, but these are not meant to be fire-breathing SSDs. The PCIe 3.0 x2 interface will limit things a bit, and these drives are geared more towards power efficiency, with a typical active power draw of only 3.2 Watts. While we were not sampled the 120GB part, it does appear to maintain decent specified performance despite the lower capacity, which is a testament to the performance of Toshiba's 64-layer 3D BiCS TLC flash.

Packaging:


Not much to talk about here. Simple, no frills, SSD packaging. Just enough to ensure the product arrives undamaged. Mission accomplished.

Read on for our full review of the Toshiba RC100 240GB and 480GB SSDs!

T'was a dark and StorMI knight

Subject: Storage | June 11, 2018 - 06:10 PM |
Tagged: amd, StorMI, tiered storage

AMD's StoreMI (Store Machine Intelligence) technology seeks to create a hybrid better than the sum of its parts, combining the low cost of cold spinning rust with the speed of hot flash-based drives.  The implementation is not the same as Intel's SRT, which treats your SSD as a cache for frequently used files; instead it works like a tiered storage system.  That means entire files move between hot and cold storage as their usage patterns change, rather than a cache being constantly rebuilt.
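To make the cache vs. tier distinction concrete, here is a toy sketch of tiering logic; this is purely illustrative and not AMD's actual StoreMI algorithm (the threshold and promotion rule are invented for the demo):

```python
# Toy illustration of tiering vs. caching -- not AMD's actual StoreMI logic.
# A cache keeps a second copy of hot files on the SSD; a tier *moves* the
# single copy between the SSD ("hot") and the HDD ("cold") as usage changes.

HOT_THRESHOLD = 5  # accesses before a file is promoted (arbitrary for the demo)

class TieredStore:
    def __init__(self):
        self.location = {}  # file -> "ssd" or "hdd"
        self.hits = {}      # file -> access count

    def add(self, name):
        self.location[name] = "hdd"   # new files land on the cold tier
        self.hits[name] = 0

    def read(self, name):
        self.hits[name] += 1
        if self.location[name] == "hdd" and self.hits[name] >= HOT_THRESHOLD:
            self.location[name] = "ssd"   # promote: the file moves, no duplicate
        return self.location[name]

store = TieredStore()
store.add("game.bin")
for _ in range(6):
    tier = store.read("game.bin")
print(tier)  # "ssd" -- promoted after repeated access
```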

From the testing [H]ard|OCP did, the machine intelligence part of StoreMI lives up to its name, and the installation and configuration experience is very well done, to the point where they declare Intel's Rapid Storage Technology outclassed and not even worth considering as competition for AMD's storage stacking skills.


"AMD’s StoreMI or (Store Machine Intelligence Technology) is storage performance enhancement technology, which can accelerate the responsiveness and the perceived speed of mechanical storage devices to SSD levels. This isn’t exactly a new concept, but AMD’s approach to this implementation is different than what we’ve seen in the past."


Source: [H]ard|OCP
Subject: Storage
Manufacturer: Intel

A little Optane for your HDD

Intel's Optane Memory caching solution, launched in April of 2017, was a straightforward feature. On supported hardware platforms, consisting of 7th and 8th generation Core processor-based computers, users could add a 16GB or 32GB Optane M.2 module to their PC and enable acceleration for their slower boot device (generally a hard drive). Beyond that, there weren't any additional options; you could only enable or disable the caching solution.

However, users who were looking for more flexibility were out of luck. If you already had a fast boot device, such as an NVMe SSD, you had no use for these Optane Memory modules, even if you had a slow hard drive in your system for mass storage that you wanted to speed up.


At GDC this year, alongside the announcement of 64GB Optane Memory modules, Intel announced that it is bringing support for secondary drive acceleration to the Optane Memory application.

Now that we've gotten our hands on this new 64GB module and the appropriate software, it's time to put it through its paces and see if it was worth the wait.

Performance

The full test setup is as follows:

Test System Setup
CPU                Intel Core i7-8700K
Motherboard        Gigabyte H370 Aorus Gaming 3
Memory             16GB Crucial DDR4-2666 (running at DDR4-2666)
Storage            Intel SSD Optane 800P
                   Intel Optane Memory 64GB and 1TB Western Digital Black
Sound Card         On-board
Graphics Card      NVIDIA GeForce GTX 1080 Ti 11GB
Graphics Drivers   NVIDIA 397.93
Power Supply       Corsair RM1000x
Operating System   Windows 10 Pro x64 RS4


In coming up with test scenarios to properly evaluate drive caching on a secondary, mass storage device, we had a few criteria. First, we were looking for scenarios that require lots of storage, meaning they wouldn't fit on a smaller SSD. In addition, the applications must also rely on fast storage.

Click here to continue reading our look at accelerating secondary drives with Optane

Subject: Storage
Manufacturer: ASUS

Is it a usable feature?

EDIT: We've received some clarification from Intel on this feature:

"The feature is actually apart of RST. While this is a CPU-attached storage feature, it is not VROC. VROC is a CPU-attached PCIe Storage component of the enterprise version of the product, Intel RSTe. VROC requires the new HW feature Intel Volume Management Device (Intel VMD) which is not available on the Z370 Chipset.

The Intel Rapid Storage Technology for CPU-attached Intel PCIe Storage feature is supported with select Intel chipsets and requires system manufacturer integration. Please contact the system manufacturer for a list of their supported platforms."

While this doesn't change how the feature works, or our testing, we wanted to clarify this point and have removed all references to VROC on Z370 in this review.

While updating our CPU testbeds for some upcoming testing, we came across an odd listing on the UEFI updates page for our ASUS ROG STRIX Z370-E motherboard.


From the notes, it appeared that the release from late April of this year enables VROC for the Z370 platform. Taking a look at the rest of ASUS' Z370 lineup, it appears that all of its models received a similar UEFI update mentioning VROC. EDIT: As it turns out, while these patch notes call this feature "VROC", it is officially known as "Intel Rapid Storage Technology for CPU-attached Intel PCIe Storage" and is slightly different from VROC on other Intel platforms.

While we are familiar with VROC as a CPU-attached RAID technology for NVMe devices on the Intel X299 and Xeon Scalable platforms, it has never been mentioned as an available option for the enthusiast-grade Z-series chipsets. Could this be a preview of a feature Intel has planned for the upcoming Z390 chipset?

Potential advantages of a CPU-attached RAID mode on the Z370 platform mostly revolve around throughput. While the chipset RAID mode on Z370 will support three drives, the total throughput is limited to just under 4GB/s by the DMI 3.0 link between the processor and chipset.

As we've seen AMD do on its X470 platform, CPU-attached RAID should scale as long as you have CPU-connected PCI-Express lanes available that are not being used by another device like a GPU or network card.

First, some limitations.

Primarily, it's difficult to connect multiple NVMe devices to the CPU rather than the chipset on most Z370 motherboards. Since the platform natively supports NVMe RAID through the Z370 chipset, all of the M.2 slots on our Strix Z370-E are wired to go through the chipset connection rather than directly to the CPU's PCIe lanes.


To combat this, we turned to the ASUS Hyper M.2 X16 card, which utilizes PCIe bifurcation to enable the use of four M.2 devices via one PCIe x16 slot. Luckily, ASUS has built support for bifurcation, and for this Hyper M.2 card, into the UEFI of the Strix Z370-E.


Aiming to simplify the setup, we are using the integrated UHD 630 graphics of the i7-8700K and running the Hyper M.2 card in the primary PCIe slot, usually occupied by a discrete GPU.

Continue reading our look at CPU-attached NVMe RAID on Z370 motherboards from ASUS!

Marvell Updates NVMe SSD Controllers - 8-Channel 88SS1100 and 4-Channel 88SS1084

Subject: Storage | June 7, 2018 - 06:08 AM |
Tagged: toggle NAND, ssd, PCIe 3.0 x4, ONFI, NVMe, Marvell, controller, 88SS1100, 88SS1084

We've seen faster and faster SSDs over the past decade, and while the current common interface is PCIe 3.0 x4, SSD controllers still have a hard time saturating the available bandwidth. This is due to factors like the power consumption constraints of the M.2 form factor, as well as controllers not being sufficiently optimized to handle IO requests at a consistently low latency. This means there is plenty of room for improvement, and with that, we have two new NVMe SSD controllers out of Marvell:


First up is the 88SS1100, an 8-channel controller that promises higher performance than Marvell's previous parts. There is also a nearly identical 88SS1084, which drops to four physical channels but retains the same eight CE (chip enable) lines, meaning it can still talk to eight separate banks of flash; this should keep performance reasonable despite the halving of the physical channels. Reducing channels to the flash helps save power and reduces the cost of the controller.
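As a toy illustration of why the extra chip-enable lines help (this mapping is illustrative, not Marvell's actual firmware behavior): with two CE lines per channel, the controller can issue a command to one die while the other die on the same channel is busy programming, keeping each channel occupied.

```python
# Illustrative die-to-(channel, CE) mapping for a 4-channel / 8-CE controller.
# Not Marvell's actual firmware -- just the round-robin idea behind banking:
# eight addressable banks survive the drop from eight channels to four.
CHANNELS = 4
CE_PER_CHANNEL = 2   # 8 CE lines total across 4 channels

def die_address(die: int) -> tuple[int, int]:
    """Map a logical die index to a (channel, chip_enable) pair."""
    return (die % CHANNELS, (die // CHANNELS) % CE_PER_CHANNEL)

for die in range(8):
    ch, ce = die_address(die)
    print(f"die {die} -> channel {ch}, CE {ce}")
```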

Marvell claims the new controller can reach 3.6GB/s throughput and 700,000 IOPS. Granted, it would need to be mated to solid-performing flash to reach those levels, but that shouldn't be an issue, as the new controllers increase compatibility with modern flash communication protocols (ONFI 4.0, Toggle 3.0, etc.). Marvell's NANDEdge tech (their name for their NAND-side interface) enters its fourth generation, promising compatibility with 96-layer and TLC / QLC flash.


Specs for the 8-channel 88SS1100. The 88SS1084 is identical except the BGA package drops in size to 12mm x 13.5mm and only requires 418 balls.

Rounding out the specs are the staples expected in modern SSD controllers, like OTP / Secure Drive / AES hardware crypto support, and NVMe 1.3 compliance for the host end of the interface.

While the two new parts are 'available for purchase now', it will take a few months before we see them appear in purchasable products. We'll be keeping an eye out for appearances in future SSD launches!

Marvell's full press release appears after the break.

Source: Marvell

Computex 2018: Intel Announces 380GB Optane 905P in M.2 22110 Form Factor

Subject: Storage | June 6, 2018 - 03:55 AM |
Tagged: ssd, Optane Memory, Optane, M.2 22110, M.2, Intel, 905P, 3D XPoint

At Computex 2018, Intel announced a new Optane 905P SSD:


...the Optane 905P 380GB, now in an M.2 form factor!


This looks to be a miniaturization of the 7-channel controller previously only available in the desktop add-in cards (the PCB carries seven media packages). There is a catch though: fitting seven packages plus a relatively large controller means this is not M.2 2280 but M.2 22110. The M.2 22110 (110mm long) form factor may limit where you can install this product, as mobile platforms and some desktop motherboards only support up to M.2 2280 (80mm) lengths. Power consumption may also be a concern for mobile applications, as this looks to be the full-blown 7-channel controller present on the desktop AIC variants of the 905P and 900P.

We have no performance numbers just yet, but based on the above we should see figures in-line with the desktop Optane parts (and higher than the previous 'Optane Memory'/800P M.2 parts, which used a controller with fewer channels). Things may be slightly slower since this part would be limited to a ~7W power envelope - that is the maximum you can get out of an M.2 port without damaging the motherboard or overheating the smaller surface area of an M.2 form factor.

An interesting point to bring up: while 3D XPoint does not need to be overprovisioned the way NAND flash does, there is still a need for some spare area, as well as space for the translation layer (used for wear leveling, which is still a requirement for 3D XPoint since the media must be managed to some degree). In the past, we've noted that smaller capacities of a given line make a slightly smaller proportion of their raw media available as user capacity. Let's see how this (theoretically) works out for the new 905P:

I'm making an educated guess that the new 380GB part contains 4-die stacks within its packages. We've never seen 8-die stacks come out of Intel, and there is little reason to believe any would be used in this product based on the available capacity. Note that higher capacities run at ~17% excess media, but as capacity decreases, the percentage of excess increases. The 280GB 900P rises to 20%, yet the new 905P M.2 comes in at just 18%. Not much of a loss there, meaning the cost/GB *should* come in line with the pricing of the 480GB 900P, which should put the 905P 380GB right at a $450-$500 price point.
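Running those numbers as a sanity check (my assumptions, not Intel's: first-generation 3D XPoint dies are 128 Gbit / 16 GB each, and the die counts for the 900P parts are inferred from their capacities):

```python
# Reproducing the overprovisioning guesswork above. Assumption: first-gen
# 3D XPoint dies are 128 Gbit (16 GB) each, and the 380GB M.2 905P carries
# 7 packages x 4-die stacks = 28 dies.
DIE_GB = 16

def excess_media(raw_dies: int, usable_gb: int) -> float:
    """Fraction of raw media beyond the user-visible capacity."""
    raw_gb = raw_dies * DIE_GB
    return raw_gb / usable_gb - 1

# 905P M.2 380GB: 28 dies -> 448 GB raw
print(f"905P 380GB: {excess_media(28, 380):.0%} excess")   # ~18%
# 900P 280GB: 21 dies -> 336 GB raw (die count is my inference)
print(f"900P 280GB: {excess_media(21, 280):.0%} excess")   # 20%
# 900P 480GB: 35 dies -> 560 GB raw (die count is my inference)
print(f"900P 480GB: {excess_media(35, 480):.0%} excess")   # ~17%
```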

The new 905P M.2 22110 is due out later this year.

Source: Intel