Subject: Storage | July 13, 2018 - 03:57 PM | Jeremy Hellstrom
Tagged: toshiba, RC100, NVMe, M.2, M.2 2242
The wee M.2 2242 form factor of the RC100 leaves no room for a DRAM buffer, which led Toshiba to utilize the Host Memory Buffer (HMB) feature introduced in NVMe revision 1.2. To use this feature you must be running Windows 10 Fall Creators Update (version 1709) or at least the 4.14 Linux kernel. It commandeers a portion of your system RAM to act as the cache, which is somewhat less effective than having DRAM on board, as The Tech Report's testing shows. The drive is also hampered by its PCIe x2 interface, which ensures it falls behind x4 NVMe drives.
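If you are curious whether a drive in your own system advertises HMB support, the NVMe Identify Controller data exposes it via the `hmpre` (Host Memory Buffer Preferred Size) and `hmmin` (Minimum Size) fields, which `nvme id-ctrl` from nvme-cli prints in its text output. Here is a minimal sketch that parses those fields; the sample output below is hypothetical, not captured from an actual RC100:

```python
import re

def parse_hmb_fields(id_ctrl_output: str) -> dict:
    """Extract HMB fields from `nvme id-ctrl` text output.

    hmpre / hmmin are reported in units of 4 KiB pages per the NVMe spec;
    a nonzero hmpre indicates the controller supports Host Memory Buffer.
    """
    fields = {}
    for name in ("hmpre", "hmmin"):
        m = re.search(rf"^{name}\s*:\s*(\d+)", id_ctrl_output, re.MULTILINE)
        fields[name] = int(m.group(1)) if m else 0
    return fields

# Hypothetical excerpt of `nvme id-ctrl /dev/nvme0` for an HMB-capable drive:
sample = """\
vid       : 0x1179
hmpre     : 12800
hmmin     : 8192
"""

hmb = parse_hmb_fields(sample)
if hmb["hmpre"] > 0:
    # Convert 4 KiB units to MiB for a human-readable size
    print(f"HMB supported, preferred size: {hmb['hmpre'] * 4 // 1024} MiB")
```

The actual buffer the OS grants can be anywhere between the minimum and preferred sizes, which is why kernel/OS support (1709 or Linux 4.14+) matters: without it the drive falls back to running fully DRAM-less.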
The testing reveals the weaknesses of this design, but it is an interesting implementation of an NVMe feature not often seen, which is in itself worth a look.
"Toshiba's RC100 NVMe SSD takes a bold stab at life without DRAM or a full four lanes of PCIe connectivity. Unlike many DRAM-less SSDs, however, the RC100 has a trick up its sleeve with the NVMe protocol's Host Memory Buffer caching feature. Join us to find out whether NVMe and HMB can bolster this entry-level SSD's performance."
Here are some more Storage reviews from around the web:
- ADATA XPG SX8200 NVMe SSD @ Modders-Inc
- TeamGroup T-Force Delta RGB 250GB SSD @ Guru of 3D
- NVMe SSD Roundup 2018: Intel Optane, WD Black and Samsung 970 Evo/Pro @ Techspot
- Samsung SSD 860 EVO 1TB @ Benchmark Reviews
- HP Portable SSD P800 @ Benchmark Reviews
Motherboard manufacturer Biostar is expanding its solid state drive lineup with the launch of the M500 M.2 2280 SSD, which appears to be the company’s first PCIe NVMe SSD (it is not the company's first M.2 drive, but the previous ones used SATA). The new Biostar M500 SSD uses 3D TLC NAND flash and supports the NVMe 1.2 protocol over a PCIe x2 interface. The exact controller and flash chips used have not yet been revealed, however.
Biostar continues its gamer / racing aesthetics with the new drive, which features a black heatsink with two LEDs that serve a utilitarian purpose. One LED shows the temperature of the drive at a glance (red/yellow/green) while the other shows data transfer activity and indicates which PCIe mode (2.0 / 3.0) the drive is operating in.
The M500 SSD uses up to 1.7W while reading. It comes in four SKUs at 128 GB, 256 GB, 512 GB, and 1 TB capacities, with 256 MB, 512 MB, or 1 GB of DDR3L cache depending on capacity.
As far as performance is concerned, Biostar claims up to 1,700 MB/s sequential reads and 1,100 MB/s sequential writes. Further, the drives offer up to 200K random read IOPS and 180K random write IOPS. Of course, these numbers are for the top end 512 GB and 1 TB drives and the lower capacity models will have less performance as they have less cache and flash channels to spread reads and writes from/to.
| SSD Capacity | Max Sequential Read | Max Sequential Write | Read IOPS | Write IOPS | Price |
| --- | --- | --- | --- | --- | --- |
| 128 GB | 1,500 MB/s | 550 MB/s | 200K | 180K | $59 |
| 256 GB | 1,600 MB/s | 900 MB/s | 200K | 180K | $99 |
| 512 GB | 1,700 MB/s | 1,100 MB/s | 200K | 180K | $149 |
| 1 TB | 1,700 MB/s | 1,100 MB/s | 200K | 180K | $269 |
According to Guru3D, Biostar’s M500 M.2 drives will be available soon with MSRPs of $59 for the 128 GB model, $99 for the 256 GB model, $149 for the 512 GB drive, and $269 for the 1 TB SKU. The pricing does not seem terrible, though the x2 interface does limit the drives' potential. These are squarely budget SSDs aimed at competing with SATA SSDs and enticing upgrades from mechanical drives. They may be most useful for upgrading older laptops, where an x2 drive would not waste an x4 M.2 slot the way it would on a desktop machine.
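The x2 interface caveat is easy to quantify. A PCIe lane's usable bandwidth is its transfer rate times its line-encoding efficiency, so the claimed 1,700 MB/s sequential read sits just under a PCIe 3.0 x2 link's ceiling. A quick sketch of the arithmetic (per-direction, ignoring protocol overhead beyond line encoding):

```python
def pcie_bandwidth_mbps(gen: int, lanes: int) -> float:
    """Approximate per-direction PCIe bandwidth in MB/s.

    PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b.
    """
    rates = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}  # (GT/s, encoding efficiency)
    gts, eff = rates[gen]
    return gts * eff * lanes * 1000 / 8  # Gb/s -> MB/s

print(round(pcie_bandwidth_mbps(3, 2)))  # x2 link: ~1969 MB/s ceiling
print(round(pcie_bandwidth_mbps(3, 4)))  # x4 link: ~3938 MB/s ceiling
```

So the M500's 1,700 MB/s spec nearly saturates its x2 link, while x4 NVMe drives have roughly double the headroom; dropping into the PCIe 2.0 mode its LED can indicate would halve the ceiling again.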
What do you think about Biostar’s foray into NVMe solid state drives?
Toshiba RC100 240GB/480GB SSD Review
Budget SSDs are a tough trick to pull off. You have components, a PCB, and ultimately assembly - all things which cost money. Savings can be had when major components (flash) are sourced from within the same company, but there are several companies already playing that game. Another way to go is to reduce PCB size, but then you can only fit so much media on the same board as the controller and other necessary parts. Samsung attempted something like this with its PM971, but that part was never sold at retail, meaning the cost savings were only passed to the OEMs implementing that part into their systems. It would be nice if a manufacturer would put a part like this into the hands of regular customers looking to upgrade their system on a budget, and Toshiba is aiming to do just that with their new RC100 line:
Not only did Toshiba stack the flash and controller within the same package, they also put that package on an M.2 2242 PCB. There is really no need for additional length here, and they could possibly have gotten away with M.2 2230, but that might have required components on the back side of the PCB. A single-sided PCB is cheaper to produce than a double-sided one, even at 12mm longer, so the design decision makes sense here.
Bear in mind these are budget parts and small ones at that. The specs are decent, but these are not meant to be fire-breathing SSDs. The PCIe 3.0 x2 interface will be limiting things a bit, and these are geared more towards power efficiency with a typical active power draw of only 3.2 Watts. While we were not sampled the 120GB part, it does appear to maintain decent specified performance despite the lower capacity, which is a testament to the performance of Toshiba's 64-layer 3D BiCS TLC flash.
Not much to talk about here. Simple, no frills, SSD packaging. Just enough to ensure the product arrives undamaged. Mission accomplished.
Subject: Storage | June 7, 2018 - 06:08 AM | Allyn Malventano
Tagged: toggle NAND, ssd, PCIe 3.0 x4, ONFI, NVMe, Marvell, controller, 88SS1100, 88SS1084
We've seen faster and faster SSDs over the past decade, and while the current common interface is PCIe 3.0 x4, SSD controllers still have a hard time saturating the available bandwidth. This is due to other factors like power consumption constraints of the M.2 form factor as well as the controllers not being sufficiently optimized to handle IO requests at a consistently low latency. This means there is plenty of room for improvement, and with that, we have two new NVMe SSD controllers out of Marvell:
Above is the block diagram for the 88SS1100, an 8-Channel controller that promises higher performance over Marvell's previous parts. There is also a nearly identical 88SS1084, which drops to four physical channels but retains the same eight CE (chip enable) lines, meaning it can still talk to eight separate banks of flash, which should keep performance reasonable despite the halving of the physical channels available. Reducing channels to the flash helps save power and reduces the cost of the controller.
Marvell claims the new controller can reach 3.6 GB/s throughput and 700,000 IOPS. Granted, it would need to be mated to solid-performing flash to reach those levels, but that shouldn't be an issue, as the new controllers increase compatibility with modern flash communication protocols (ONFi 4.0, Toggle 3.0, etc). Marvell's NANDEdge tech (their name for their NAND-side interface) enters its fourth generation, promising compatibility with 96-layer and TLC / QLC flash.
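A quick back-of-envelope check puts that 3.6 GB/s claim in context. Spread across the controller's flash channels, it implies a per-channel bandwidth requirement that the 8-channel part can meet comfortably with modern interfaces (ONFi 4.0 tops out around 800 MT/s per channel), while the 4-channel 88SS1084 would presumably land somewhat below the headline figure; these channel counts are from the announcement, the rest is simple arithmetic:

```python
# Per-channel flash bandwidth needed to sustain the claimed sequential throughput.
claimed_throughput_mbps = 3600
channels = {"88SS1100": 8, "88SS1084": 4}

for name, ch in channels.items():
    per_channel = claimed_throughput_mbps / ch
    print(f"{name}: {per_channel:.0f} MB/s required per channel")
```

450 MB/s per channel on the 8-channel part leaves headroom; 900 MB/s on the 4-channel part would exceed what a single ONFi 4.0 channel delivers, which is why retaining all eight CE lines for bank interleaving matters for keeping the 88SS1084's performance reasonable.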
Specs for the 8-Channel 88SS1100. The 88SS1084 is identical except the BGA package drops in size to 12mm x 13.5mm and only requires 418 balls.
Rounding out the specs are the staples expected in modern SSD controllers, like OTP / Secure Drive / AES hardware crypto support, and NVMe 1.3 compliance for the host end of the interface.
While the two new parts are 'available for purchase now', it will take a few months before we see them appear in purchasable products. We'll be keeping an eye out for appearances in future SSD launches!
Introduction, Specifications and Packaging
ADATA has a habit of occasionally coming out of the woodwork and dropping a great performing SSD on the market at a highly competitive price. A few of their recent SATA SSD launches were promising, but some were very difficult to find in online stores. This has improved more recently, and current ADATA products now enjoy relatively wide availability. We were way overdue for an ADATA review, and the XPG SX8200 is a great way for us to get back into covering this company's offerings:
For those unaware, XPG is a computing-related sub-brand of ADATA, and if you have a hard time finding details for these drives online, it is because you must look at their dedicated xpg.com domain. Parent brand ADATA has since branched into LED lighting and other industrial applications, such as solid-state drive motor controllers and the like. Some PC products bear the ADATA name, such as USB drives and external hard drives.
Ok, enough rambling about other stuff. Let's take a look at this XPG SX8200!
Specs are mostly par for the course here, with a few notable exceptions. The SX8200 opts for lower available capacities than you would typically see with a TLC SSD. That means a slight bump in OP (over-provisioning), which in turn nudges endurance higher. Another interesting point is that they have simply based their specs of 'up to 3200 MB/s read / 1700 MB/s write' on direct measurements from common benchmarking software. While the tests they used are 'short-run' benchmarks that will remain within the SLC cache of these SSDs, I do applaud ADATA for their openness here.
Straightforward packaging with a small bonus inside - in the form of a thermal adhesive-backed aluminum heat spreader. This is included as an option since some folks may have motherboards with integrated heat spreading M.2 socket covers or laptops with extremely tight clearances, and the added thickness may not play nicely in those situations.
Subject: Storage | May 25, 2018 - 02:47 PM | Jeremy Hellstrom
Tagged: XPG SX8200, SM2262, NVMe, M.2, adata, 480GB
ADATA's XPG SX8200 uses the Silicon Motion SM2262 controller found in recent Intel and Mushkin M.2 SSDs, so we have an idea of its capabilities in conjunction with Micron's 64-layer 3D TLC NAND. In The Tech Report's real-world testing, this drive beat out Intel's 760p by a small margin in both reads and writes, and it is slightly cheaper to pick up. It didn't come out as the fastest drive they've tested, but it does show up near the top.
If you aren't quite sure whether this drive is for you, just wait a wee bit, as Al has it strapped down on his test bench right now. *Allyn EDIT:* Our review is now live!
"Adata's got a half-dozen NVMe M.2 drives available across its entire lineup, but its latest—the XPG SX8200—promises to dazzle with Micron's newest-gen 3D TLC and a Silicon Motion SM2262 controller. We break down the XPG SX8200 to find out if it's as good as the top dogs in the market."
Here are some more Storage reviews from around the web:
- Kingston A1000 480 GB @ TechPowerUp
- Crucial MX500 1TB M.2 @ Guru of 3D
- Crucial MX500 M.2 1 TB @ TechPowerUp
- AMD StoreMI Technology Review @ Neoseeker
- Silicon Power Bolt B80 240GB USB 3.1 Gen 2 Portable SSD Review @ NikKTech
- Netstor NA611TB3 Thunderbolt 3 External Drive Enclosure @ Kitguru
Subject: Storage | May 21, 2018 - 04:31 PM | Allyn Malventano
Tagged: ssd, QLC, NVMe, nand, Intel, Floating Gate, flash, die, 1Tbit
In tandem with Micron's launch of their new enterprise QLC SSDs, there is a broader technology announcement coming out of Intel today. This release covers the fact that Intel and Micron have jointly developed shippable 64-Layer 3D QLC NAND.
IMFT's 3D NAND announcement came back in early 2015, and Intel/Micron Flash Technologies have been pushing their floating gate technology further and further. Not only do we have the QLC announcement today, but with it came talks of progress on 96-layer development as well. Combining QLC with 96-Layer would yield a single die capacity of 1.5 Tbit (192GB), up from the 1 Tbit (128GB) capacity of the 64-Layer QLC die that is now in production.
This new flash won't be meant for power users, but should be completely usable in a general use client SSD, provided there is a bit of SLC (or 3D XPoint???) cache on the front end. QLC does store 33% more data in the same die space, which should eventually translate to a lower $/GB once development costs have been recouped. Here's hoping for lower cost SSDs in the future!
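The density math behind those figures is straightforward cell arithmetic, nothing product-specific:

```python
# QLC stores 4 bits per cell vs. TLC's 3, so for the same cell count:
tlc_bits, qlc_bits = 3, 4
density_gain = qlc_bits / tlc_bits - 1
print(f"QLC stores {density_gain:.0%} more data in the same die space")

# Die capacities from the announcement, converted from Tbit to GB:
print(1024 // 8, "GB")  # 1 Tbit 64-layer QLC die
print(1536 // 8, "GB")  # 1.5 Tbit 96-layer QLC die
```

That 33% gain comes purely from the extra bit per cell; the jump from 1 Tbit to 1.5 Tbit per die then stacks the 96-layer process gain on top of it.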
Subject: General Tech | May 11, 2018 - 02:07 PM | Jeremy Hellstrom
Tagged: SM2260, ssd, pcie, NVMe, M.2 2280, M.2, Intel, 600p
Intel's 600p was on our review bench almost two years ago and offered a relatively inexpensive entry into NVMe drives. It turns out that the Silicon Motion controller Intel used may have been a bit too proprietary, as the Win10 April Update is not compatible with it. According to The Register, this is a known incompatibility caused by a fix to resolve previous issues with Samsung-made NVMe SSDs. Microsoft is working on a solution, with no release date announced as of yet.
"The issue is an unspecified "known incompatibility" between the operating system and the SSDs, which were launched in 2016. Both the 600p and Pro 6000p SSDs share the same SM2260 chipset and feature a PCIe NVMe 3.0 x4 interface."
Here is some more Tech News from around the web:
- IBM bans all removable storage, for all staff, everywhere @ The Register
- Nest warns users to change potentially-pwned passwords @ The Inquirer
- Malicious Chrome Extensions Infect Over 100,000 Users Again @ Slashdot
- Alexa, Google Assistant and Siri can be fooled by 'silent' commands @ The Inquirer
Introduction, Specifications and Packaging
We have been overdue for a Samsung NVMe SSD refresh, and with the launch of their 860 PRO and EVO back in January, folks have been itching for the 970's to come out. The 950 and 960 (PRO) lines were separated by about a year, but we are going on 18 months since the most recent 960 EVO launch. Samsung could afford to wait a bit longer since the 960 line already offered outstanding performance that remained unmatched at the top of our performance charts for a very long time. Recently, drives like the WD Black have started catching up, so it is naturally time for Samsung to keep the competition on their toes:
Today we will look at most of the Samsung 970 PRO and EVO lineup. We have a bit of a capacity spread for the EVO, and a single PRO. Samples are hard to come by so far since Samsung opted to launch both lines at the same time, but we tried to get the more common capacities represented. EVO 2TB and PRO 1TB data will have to come at a later date.
Specs come in at just slightly higher than the 960 lines, with some welcome additions like OPAL and encrypted drive (IEEE1667) support, the latter being suggested but never making it into the 960 products. Another welcome addition is that the 970 EVO now carries a 5-year warranty (up from 3).
The 970 EVO includes 'Intelligent TurboWrite', which was introduced with the 960 line. This setup maintains a static SLC area and an additional 'Intelligent' cache that exists if sufficient free space is available in the TLC area.
Packaging is in line with the previous 960 series parts. Nice packaging. If it ain't broke, don't fix it.
NVMe RAID and StoreMI
With Ken testing all of the new AMD X470 goodness that we had floating around the office here at PCPer, I snuck in some quick storage testing to get a look at just how the new platform handled a typical power user NVMe RAID configuration. We will be testing a few different platform configurations:
- ASUS Z270 w/ 7700K
- 1x SSD behind chipset (PCH)
- 2x SSD (RAID-0) behind chipset (PCH)
- 1x SSD directly connected to CPU
- AMD X470 w/ 2600X
- 1x SSD via RAIDXpert bottom driver
- 2x SSD (RAID-0) via RAIDXpert
- 1x SSD via MS InBox NVMe driver
For the AMD system we tested, all M.2 ports were directly connected to the CPU. This should be the case for most systems, since the AMD chipset has only a PCIe 2.0 x4 link which would cut most NVMe SSD bandwidth in half if passed through it. The difference on AMD is that installing the RAIDXpert software also installs a 'bottom driver' which replaces the Windows NVMe driver, while Intel's RST platform handles this process more in the chipset hardware (but is limited to PCIe 3.0 x4 DMI bandwidth). Now onto the results:
Random Read IOPS
For random IO, we see expected scaling from AMD, but do note that IOPS comes in ~40% lower than the same configuration on Intel's platform. This is critical as much of the IO seen in general use is random reads at lower queue depths. We'd like to see AMD doing better here, especially in the case where a single SSD was operating without the interference of the RAIDXpert driver, which was better, but still not able to match Intel.
Random Read Latency
This latency chart should better explain the IOPS performance seen above. Note that across the board, latency increases by ~10us on the X470 platform, followed by another ~20us when switching to the RAIDXpert driver. That combined ~30us is 50% of the 60us QD1 latency seen on the Z270 platform (regardless of configuration).
Ok, now we see the AMD platform stretch its legs a bit. Since Intel NVMe RAID is bottlenecked by its DMI link while AMD has all NVMe SSDs directly connected to the CPU, AMD is able to trounce Intel on sequentials, but there is a catch. Note the solid red line, which means no RAIDXpert software. That line tracks as it should, leveling off horizontally at a maximum for that SSD. Now look at the two dashed red lines and note how they fall off at ~QD8/16. It appears the RAIDXpert driver is interfering and limiting the ultimate throughput possible. This was even the case for a single SSD passing through the RAIDXpert bottom driver (configured as a JBOD volume).
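The sequential advantage for AMD follows directly from the link topology described above. All Intel RST RAID traffic funnels through the DMI 3.0 link, which is electrically equivalent to a single PCIe 3.0 x4 connection, while each CPU-attached SSD on X470 gets its own x4 link. A quick sketch of the theoretical ceilings for the two-drive RAID-0 case:

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s per lane
LANE_GBPS = 8 * 128 / 130 / 8

# Intel: both SSDs share one DMI 3.0 link (PCIe 3.0 x4 equivalent).
dmi_ceiling = LANE_GBPS * 4

# AMD: two SSDs, each with a dedicated PCIe 3.0 x4 link to the CPU.
direct_ceiling = LANE_GBPS * 4 * 2

print(f"Intel 2x RAID-0 behind DMI:  ~{dmi_ceiling:.1f} GB/s ceiling")
print(f"AMD 2x RAID-0 CPU-attached: ~{direct_ceiling:.1f} GB/s ceiling")
```

Roughly 3.9 GB/s versus 7.9 GB/s of raw headroom, which is why AMD can trounce Intel on sequentials here so long as the RAIDXpert driver stays out of the way; the dashed-line falloff at ~QD8/16 shows that in practice the driver, not the links, becomes the limit.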
AMD has also launched their answer to Intel RST caching. StoreMI is actually a more flexible solution that offers some unique advantages over Intel. Instead of copying a section of HDD data to the SSD cache, StoreMI combines the total available storage space of both the HDD and SSD, and is able to seamlessly shuffle the more active data blocks to the SSD. StoreMI also offers more cache capacity than Intel - up to 512GB SSD caches are possible (60GB limit on Intel). Lastly, the user can opt to donate 2GB of RAM as an additional caching layer.
AMD claims the typical speedups that one would expect with an SSD caching a much slower HDD. We have done some testing with StoreMI and can confirm the above slide's claims. Actively used applications and games end up running at close to SSD speeds (after the first execution, which comes from the HDD). StoreMI is not yet in a final state, but that is expected within the next week or two. We will revisit that topic with hard data once we have the final shipping product on-hand.