Subject: Graphics Cards, Motherboards | August 29, 2016 - 01:20 AM | Scott Michaud
Tagged: pcie, PCI SIG
Last week, various outlets reported (incorrectly) that PCIe 4.0 would provide “at least 300W” through the slot. That would have been roughly equal to the power a PCIe 3.0 GPU can draw with an extra six-pin and an extra eight-pin power connector (75W + 75W + 150W = 300W), only delivered entirely through the slot.
Later, the PCI-SIG contacted Tom's Hardware (and likely others) to say that this is not the case. The slot will still only provide 75W of power; any other power will still need to come from external connectors. The main advantage of the standard will be extra bandwidth, about double that of PCIe 3.0, not easing cable management or making it easier to design a graphics card (by making it harder to design a motherboard).
Introduction, Specifications and Packaging
It's been too long since we took a look at enterprise SSDs here at PC Perspective, so it's high time we get back to it! The delay has stemmed from some low-level re-engineering of our test suite to unlock some really cool QoS and Latency Percentile possibilities involving PACED workloads. We've also done a lot of work to distill hundreds of hours of test results into fewer yet more meaningful charts. More on that as we get into the article. For now, let's focus on today's test subject:
Behold the Micron 9100 MAX Series. Inside that unassuming 2.5" U.2 enclosure sits 4TB of flash and over 4GB of DRAM. It's capable of 3 GB/s reads, 2 GB/s writes, and 750,000 IOPS. All from inside that little silver box! There's not a lot more to say here because nobody is going to read much past that 3/4 MILLION IOPS figure I just slipped in, so I'll just get into the rest of the article now :).
The 9100s come in two flavors and two form factors. The MAX series (1.2TB and 2.4TB in the above list) comes with very high levels of performance and endurance, while the PRO series comes with lower overprovisioning, enabling higher capacity points for a given flash loadout (800GB, 1.6TB, 3.2TB). Those five different capacity / performance points are available in both PCIe (HHHL) and U.2 (2.5") form factors, making for 10 total available SKUs. All products are PCIe 3.0 x4, using NVMe as their protocol. They should all be bootable on systems capable of UEFI/NVMe BIOS enumeration.
Idle power consumption is a respectable 7W, while active consumption is selectable in 20W, 25W, and 'unlimited' settings. While >25W operation technically exceeds the PCIe specification for non-GPU devices, we know that the physical slot is capable of 75W for GPUs, so why can't SSDs have some more fun too! That said, even in unlimited mode, the 9100s should still stick relatively close to 25W, and in our testing they did not exceed 29W at any workload. Detailed power testing is coming to future enterprise articles, but for now, the extent will be what was measured and noted in this paragraph.
Our 9100 MAX samples came only in anti-static bags, so no fancy packaging to show here. Enterprise parts typically come in white/brown boxes with little flair.
Subject: Storage | June 13, 2016 - 03:46 AM | Allyn Malventano
Tagged: XPoint, tlc, Stony Beach, ssd, pcie, Optane, NVMe, mlc, Mansion Beach, M.2, kaby lake, Intel, imft, Brighton Beach, 3DNAND, 3d nand
For those unaware, XPoint (spoken 'cross-point') is a new type of storage technology that is persistent like NAND Flash but with speeds closer to that of RAM. Intel's brand name for devices implementing XPoint is Optane.
Starting at the bottom of the slide, we see a new 'System Acceleration' segment with a 'Stony Beach PCIe/NVMe m.2 System Accelerator'. This is likely a new take on Larson Creek, which was a 20GB SLC SSD launched in 2011. This small yet very fast SLC flash was tied into the storage subsystem via Intel's Rapid Storage Technology and acted as a caching tier for HDDs, which comprised most of the storage market at that time. Since Optane excels at random access, even a PCIe 3.0 x2 part could outmaneuver the fastest available NAND, meaning these new System Accelerators could act as a caching tier for Flash-based SSDs or even HDDs. These accelerators can also be good for boosting the performance of mobile products, potentially enabling the use of cheaper / lower performing Flash / HDD for bulk storage.
Skipping past the mainstream parts for now, enthusiasts can expect to see Brighton Beach and Mansion Beach, which are Optane SSDs linked via PCIe 3.0 x2 or x4, respectively. Not just accelerators, these products should have considerably more storage capacity, which may bring costs fairly high unless either XPoint production is very efficient or there is also NAND Flash present on those parts for bulk storage (think XPoint cache for NAND Flash all in one product).
We're not sure if or how the recent delays to Kaby Lake will impact the other blocks on the above slide, but we do know that many of them are on track. The SSD 540s and 5400s were in fact announced in Q2, and are Intel's first shipping products using IMFT 3D NAND. Parts we have not yet seen announced are the Pro 6000p and 600p, long-overdue M.2 SSDs that may compete against Samsung's 950 Pro. Do note that those are marked as TLC products (purple), though I suspect they may actually be a hybrid TLC+SLC cache solution.
Going further out on the timeline we naturally see refreshes to all of the Optane parts, but we also see the first mention of second-generation IMFT 3D NAND. As I hinted at in an article back in February, second-gen 3D NAND will very likely *double* the per-die capacity to 512Gbit (64GB) for MLC and 768Gbit (96GB) for TLC. While die counts will be cut in half for a given total SSD capacity, speed reductions will be partially mitigated by this flash having at least four planes per die (most previous flash was double-plane). A plane is an effective partitioning of flash within the die, with each section having its own buffer. Each plane can perform erase/program/read operations independently, and for operations where the Flash is more limiting than the interface (writes), doubling the number of planes also doubles the throughput. In short, doubling planes roughly negates the speed drop caused by halving the die count on an SSD (until you reach the point where controller-to-NAND channels become the bottleneck, of course).
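That trade-off is easy to sanity-check with a toy model. The sketch below treats total write throughput as dies × planes × per-plane program rate; the per-plane figure and die counts are illustrative assumptions of mine, not IMFT or Micron specifications.

```python
# Toy model: SSD write throughput scales with (dies * planes per die),
# since each plane can program independently. Per-plane rate is an
# assumed illustrative figure, not a real IMFT/Micron spec.

def write_throughput_mbs(dies, planes_per_die, plane_program_mbs=40):
    # Valid until controller-to-NAND channels become the bottleneck.
    return dies * planes_per_die * plane_program_mbs

# Hypothetical 1TB drive: 32 first-gen 256Gbit dies, two planes each.
gen1 = write_throughput_mbs(dies=32, planes_per_die=2)

# Second-gen 512Gbit dies halve the die count to 16, but quad-plane
# operation doubles per-die throughput, so the totals match.
gen2 = write_throughput_mbs(dies=16, planes_per_die=4)

print(gen1, gen2)  # both 2560 MB/s in this model
```

In other words, within this simplified model the halved die count and the doubled plane count cancel out exactly, which is the point made above.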
IMFT XPoint Die shot I caught at the Intel / Micron launch event.
Well, that's all I have for now. I'm excited to see that XPoint is making its way into consumer products (and Storage Accelerators) within the next year's time. I certainly look forward to testing these products, and I hope to show them running faster than they did back at that IDF demo...
Subject: Storage | May 27, 2016 - 02:42 PM | Jeremy Hellstrom
Tagged: TSV, toshiba, ssd, revodrive, RD400, pcie, ocz, NVMe, M.2, HHHL, 512GB, 2280, 15nm
If you somehow felt that there was a test Al missed while reviewing the OCZ RD400 NVMe SSD, you now have a chance for a second look. The SSD Review ran several benchmarks that we did not cover, and they display some data differently (latency, for example), but the end results are the same: this drive is up there with the Samsung 950 Pro and Intel 750 Series. Read all about it here.
"With specs that rival the Samsung 950 Pro, a capacity point that nips at the heels of the Intel 750's largest model, and competitive MSRPs, the OCZ RD400 is out for blood. Read on to learn more about this latest enthusiast class NVMe SSD and see how it competes with the best of the best!"
Here are some more Storage reviews from around the web:
- Toshiba OCZ RD400 NVMe PCIe SSD 512GB @ Kitguru
- OCZ Trion 150 480GB SSD Review @ OCC
- Mushkin Atom 128GB USB 3.0 Flash Drive Review @ NikKTech
- Kingston DataTraveler 4000 G2 64GB Encrypted USB Drive Review @ OCC
- Asustor AS6104T 4-bay NAS @ Kitguru
- Thecus N5810 PRO NAS @ Kitguru
Introduction, Specifications and Packaging
The OCZ RevoDrive has been around for a good long while. We looked at the first ever RevoDrive back in 2010. It was a bold move for the time, as PCIe SSDs were both rare and very expensive. OCZ's innovation was a new VCA RAID controller that kept latencies low and scaled properly with increased Queue Depth. OCZ got a lot of use out of this formula, later expanding to the RevoDrive 3 x2, which ran four parallel SSDs, and eventually to the enterprise Z-Drive R4, which pushed that out to eight RAIDed SSDs.
OCZ's RevoDrive lineup circa 2011.
The latter was a monster of an SSD in both physical size and storage capacity. Its performance was also impressive given that it launched five years ago. After OCZ was acquired by Toshiba, the old VCA-driven design was re-spun one last time as the RevoDrive 350, but it was the same old formula with high-latency SandForce controllers (updated with in-house Toshiba flash). The RevoDrive line needed to ditch that dated tech and move into the world of NVMe, and today it has!
Here is the new 'Toshiba OCZ RD400', branded as such under the recent rebadging that took place on OCZ's site. The Trion 150 and Vertex 180 have also been relabeled as TR150 and VT180. This new RD400 brings some significant changes over previous iterations of the line. The big one is that it is now a lean M.2 part, available with an optional adapter card for those without a spare M.2 slot.
Subject: Storage | March 8, 2016 - 03:07 PM | Allyn Malventano
Tagged: ssd, Seagate, pcie, NVMe, flash drive
Today Seagate announced that they are production ready on a couple of NVMe PCIe SSD models. These are data-center tailored units that focus on getting as much parallel flash into as small a space as possible. From engineering drawings, the first appears to be a half-height, half-length (HHHL) device that communicates over a PCIe 3.0 x8 link and reaches a claimed 6.7 GB/s:
The second model is a bit more interesting for a few reasons. This is a PCIe 3.0 x16 unit (same lane configuration as a high end GPU) that claims 10 GB/s:
10 GB/s, hmm, where have I seen that before? :)
The second image gives away a bit of what may be going on under that heatsink. There appear to be four M.2 form factor SSDs in there, which would imply that it presents as four separate NVMe devices. This is no big deal for enterprise data applications that can be pointed at multiple physical devices, and that 10 GB/s does start to make more sense (as a combined total), as we know of no single SSD controller capable of that sort of throughput. It took four Intel SSD 750s for us to reach that same 10 GB/s figure, so it stands to reason that Seagate would use that same trick, only with M.2 SSDs they can fit everything onto a single-slot card.
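The numbers line up with a quick back-of-the-envelope check. Assuming four M.2 drives each in the 2.5 GB/s class (a 950 Pro / SSD 750 ballpark figure of mine, not a Seagate spec), the aggregate hits the claimed 10 GB/s while staying comfortably inside a PCIe 3.0 x16 link:

```python
# Sanity check on the 10 GB/s claim, assuming four hypothetical M.2
# NVMe SSDs behind one x16 slot. Per-drive throughput is an assumed
# illustrative figure, not a Seagate specification.

PCIE3_PER_LANE_GBS = 0.985  # ~985 MB/s usable per PCIe 3.0 lane

drives = 4
per_drive_read_gbs = 2.5                   # assumed sequential read per drive
aggregate = drives * per_drive_read_gbs    # combined total

slot_bandwidth = 16 * PCIE3_PER_LANE_GBS   # x16 link ceiling, ~15.8 GB/s

print(aggregate)       # 10.0 GB/s combined
print(slot_bandwidth)  # the x16 link still has headroom to spare
```

So four drives saturating a shared x16 link is entirely plausible; the slot itself is not the limiting factor at 10 GB/s.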
That’s all we have on this release so far, but we may see some real product pics sneak out of the Open Compute Project Summit, running over the next couple of days.
NVMe was a great thing to happen to SSDs. The per-IO reduction in latency and CPU overhead was more than welcome, as PCIe SSDs were previously using the antiquated AHCI protocol, which was a carryover from the SATA HDD days. With NVMe came additional required support in Operating Systems and UEFI BIOS implementations. We did some crazy experiments with arrays of these new devices, but we were initially limited by the lack of native hardware-level RAID support to tie multiple PCIe devices together. The launch of the Z170 chipset saw a remedy to this, by including the ability to tie as many as three PCIe SSDs behind a chipset-configured array. The recent C600 server chipset also saw the addition of RSTe capability, expanding this functionality to enterprise devices like the Intel SSD P3608, which was actually a pair of SSDs on a single PCB.
Most Z170 motherboards have come with one or two M.2 slots, meaning that enthusiasts wanting to employ the 3x PCIe RAID made possible by this new chipset would have to get creative with the use of interposer / adapter boards (or use a combination of PCIe and U.2-connected Intel SSD 750s). With the Samsung 950 Pro available, as well as the slew of other M.2 SSDs we saw at CES 2016, it’s safe to say that U.2 is going to push back into the enterprise sector, leaving M.2 as the choice for consumer motherboards moving forward. It was therefore only a matter of time before a triple-M.2 motherboard was launched, and that just recently happened - Behold the Gigabyte Z170X-SOC Force!
This new motherboard sits at the high end of Gigabyte’s lineup, with a water-capable VRM cooler and other premium features. We will be passing this board on to Morry for a full review, but this piece will be focusing on one section in particular:
I have to hand it to Gigabyte for this functional and elegant design choice. The spacing of the four required full-length PCIe slots looks as if it was chosen specifically to fit M.2 SSDs between them. I should also note that it would be possible to use three U.2 adapters linked to three U.2 Intel SSD 750s, but native M.2 devices make for a significantly more compact and consumer-friendly package.
With the test system set up, let’s get right into it, shall we?
Subject: Storage | November 26, 2015 - 07:13 PM | Jeremy Hellstrom
Tagged: M.2, pcie, sata 6Gbs, Silverstone, ECM20
That's right: no matter if you run iOS, Linux, or Windows, if you have an M-key style M.2 port, the Silverstone ECM20 add-on card will give you both a PCI-e M.2 slot and a SATA 6Gbs M.2 slot. The board itself is just a mounting point; there is no controller, simply a pass-through for a PCI-e M.2 card and a mount for a SATA-style card (you provide the cable). This simplicity ensures that your transfer speeds will match what you would expect from a native slot, as the tests at Benchmark Reviews show. At less than $20, it is a great way to expand your high-speed storage capacity.
"The m.2 form factor is becoming popular for SSDs due to its small size, and, in PCI-E guise, superior performance. As our recent test of the Samsung 950 Pro m.2 SSD has shown, PCI-E m.2 SSDs offer performance many times that of the very best SATA SSDs, so if you’re looking for a storage upgrade, m.2 is definitely the way to go."
Here are some more Storage reviews from around the web:
- ADATA XPG SX930 and Premier SP550 240GB SATA 6G SSD Review @HiTech Legion
- HyperX Predator 240GB M.2 PCIe SSD Review @ Hardware Asylum
- Understanding M.2 RAID NVMe Boot and 2/3x M.2 NVME RAID0 Tested @ The SSD Review
- Synology DS715 2-bay NAS @ techPowerUp
- QNAP TS-563 Network Attached Storage @ Modders-Inc
- ASUSTOR AS1004T 4-bay NAS Review @ Madshrimps
- Seagate Enterprise NAS 8TB SATA III HDD Review @ NikKTech
Introduction, Specifications and Packaging
What's better than an 18-channel NVMe PCIe Datacenter SSD controller in a Half Height Half Length (HHHL) package? *TWO* 18-channel NVMe PCIe Datacenter controllers in a HHHL package! I'm sure words to this effect were uttered in an Intel meeting room some time in the past, because such a device now exists, and is called the SSD DC P3608:
The P3608 is essentially a pair of P3600s glued together on a single PCB, much like how some graphics cards merge a pair of GPUs to act with the performance of a pair of cards combined into a single one:
What is immediately impressive here is that Intel has done this same trick within 1/4 of the space (HHHL compared to a typical graphics card). We can only imagine the potential of a pair of P3600 SSDs, so let's get right into the specs, disassembly, and testing!
Subject: Storage | September 22, 2015 - 02:39 AM | Allyn Malventano
Tagged: vnand, V-NAND, ssd, Samsung, pcie, NVMe, M.2 2280, M.2, 950 PRO, 512GB, 256GB
Samsung’s newly launching product will be called the 950 PRO. This will be an M.2 2280 form factor product running at PCIe 3.0 x4. Equipped with Samsung’s 32-layer V-NAND and using the NVMe protocol enabled by a new UBX controller, the 950 PRO will be capable of up to an impressive 300,000 random read IOPS. Random writes come in at 110,000 IOPS, and sequential throughputs are expected to be 2.5 GB/sec for reads and 1.5 GB/sec for writes. Available capacities will be 256GB and 512GB.
- 256GB - $199.99 ($0.78/GB)
- 512GB - $349.99 ($0.68/GB)
- 1TB - (early next year with the switch to 48-layer V-NAND)
The 950 PRO will be shipping with a 5-year warranty, rated at 200 terabytes written for the 256GB model and 400 TBW for the 512GB. That works out to just over 100GB per day for the 256GB model and roughly double that for the 512GB.
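For anyone who wants to see where the per-day figures come from, here is the arithmetic behind the warranty ratings, using the 200 TBW and 400 TBW numbers from the announcement spread over the 5-year term:

```python
# Endurance arithmetic for the 950 PRO warranty (5 years, TBW figures
# from the announcement; the daily breakdown is my own calculation).

days = 5 * 365  # warranty period in days (ignoring leap days)

gb_per_day_256 = 200_000 / days  # 200 TBW for the 256GB model
gb_per_day_512 = 400_000 / days  # 400 TBW for the 512GB model

print(round(gb_per_day_256, 1))  # ~109.6 GB/day
print(round(gb_per_day_512, 1))  # ~219.2 GB/day
```

Either figure is far more daily writing than a typical consumer workload, so the warranty is unlikely to be the limiting factor for most buyers.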
These hit retail in October and we currently have samples in hand for testing.
(for those curious, both capacities only have components on the front side of the PCB)