Introduction, Specifications, and Packaging
Back in January of 2015, Samsung launched the Portable SSD T1. This was a good way to get more of their VNAND flash out into the market in the form of a speedy and portable USB-connected SSD. The launch went so well that they followed it up with the T3 in early 2016. While the T1 maxed out at 1TB of capacity, the T3 pushed that to 2TB, which remains the market sweet spot for max portable capacity today. With increased flash densities now shipping, it was time for Samsung to refresh the lineup:
Meet the Samsung Portable SSD T5. This new version is ever so slightly smaller than the T3, while packing a 256Gbit die version of Samsung's 64-layer VNAND, along with a newer USB controller that should help get closer to the internal SATA 6Gbit speed of the device.
Most specs are nearly identical to the T3, with a notable increase to 540MB/s throughput, thanks to the faster interface capability.
Straightforward packaging with a notable inclusion of both Type-C to A and C to C cables. The T3 and T1 came with only Type-A.
Introduction, Specifications and Packaging
Today Corsair launched their first ever HHHL form factor SSD, the NX500:
Just from the looks of this part, it is clear they were pulling out all the stops with respect to product design. This is certainly one of the most impressive looking SSDs we have seen come through our lab, and it will certainly be the type of thing enthusiasts would show off in their system builds. The NX500 is also likely to be the best showcase of Phison's new E7 controller. I'm just as eager to see if this SSD performs as well as it looks, so let's get to the review!
The specifications here are in line with what we would expect for a modern-day NVMe SSD. Note that ratings are identical for the 400GB and 800GB models, aside from a doubling of endurance due to the corresponding doubling of flash. There were some additional details in our press kit:
Extreme Performance - The Phison PS5007-E7
- Description: The PS5007-E7 is Phison's first NVMe controller designed for high performance applications, supporting up to 8 channels in its NAND Flash interface.

Extreme Reliability - Multiple features are built into the PS5007-E7 to ensure stability and reliability.
- SmartECC™ – Reconstructs defective/faulty pages when regular ECC fails
- SmartRefresh™ – Monitors block ECC health status and refreshes blocks periodically to improve data retention
- SmartFlush™ – Minimizes time data spends in cache to ensure data retention in the event of power loss

Extreme Control - The Neutron NX500 SSD with Phison PS5007-E7 controller works with the CORSAIR SSD Toolbox.
- Drive monitoring – Monitor the health of your drive
- Secure wipe – For security purposes, completely clear the drive of any recoverable data
- Firmware update – Install updated firmware as needed
As the Phison E7 is a new controller, it's worth taking a look at the internals:
Highlights from above are 8 channels to the flash, ONFI 3.2 and Toggle 2.0 support (covering most flash memory types), along with support for all modes (SLC/MLC/TLC).
I haven't seen SSD packaging this nice since the FusionIO ioDrive, and those parts were far more expensive. Great touch here by Corsair.
Introduction, Specifications and Packaging
Today Intel is launching a new line of client SSDs - the SSD 545S Series. These are simple 2.5" SATA parts that aim to offer good performance at an economical price point. Low-cost SSDs are not typically Intel's strong suit, mainly because they are extremely rigorous in their design and testing, but the ramping up of IMFT 3D NAND, now entering its second generation stacked to 64 layers, should finally help them get the cost/GB down to levels previously enjoyed by other manufacturers.
Intel and Micron jointly announced 3D NAND just over two years ago, and a year ago we talked about the next IMFT capacity bump coming as a 'double' move. Well, that's only partially happening today. The 545S line will carry the new IMFT 64-layer flash, but the capacity per die remains the same 256Gbit (32GB) as the previous generation parts. The dies will be smaller, meaning more can fit on a wafer, which drives down production costs, but the larger 512Gbit dies won't be coming until later on (and in a different product line - Intel told us they do not intend to mix die types within the same lines as we've seen Samsung do in the past).
There are no surprises here, though I am happy to see a 'sustained sequential performance' specification stated by an SSD maker, and I'm happier to see Intel claiming such a high figure for sustained writes (implying this is the TLC writing speed as the SLC cache would be exhausted in sustained writes).
I'm also happy to see sensible endurance specs for once. We've previously seen oddly non-scaling figures in prior SSD releases from multiple companies. Clearly stating a specific TBW 'per 128GB' makes a lot of sense here, and the number itself isn't bad, either.
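As a rough sketch of how a 'per 128GB' rating scales with capacity, here is the arithmetic in code form. The per-128GB figure used below is a placeholder assumption, not Intel's published number:

```python
# Endurance ratings stated 'per 128GB' scale linearly with capacity.
# TBW_PER_128GB below is a placeholder, not Intel's official figure --
# substitute the value from the 545S spec sheet.
TBW_PER_128GB = 36  # hypothetical terabytes written per 128GB of flash

def rated_tbw(capacity_gb: int) -> float:
    """Endurance scales with the number of 128GB units of flash on board."""
    return TBW_PER_128GB * (capacity_gb / 128)

for cap in (128, 256, 512, 1024):
    print(f"{cap:>4} GB -> {rated_tbw(cap):.0f} TBW")
```

The nice thing about a linearly scaling spec is that no capacity in the lineup gets an artificially capped rating.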
Simplified packaging from Intel here, apparently to help further reduce shipping costs.
Introduction, How PCM Works, Reading, Writing, and Tweaks
I’ve seen a bit of flawed logic floating around related to discussions about 3D XPoint technology. Some are directly comparing the cost per die to NAND flash (you can’t - 3D XPoint likely has fewer fab steps than NAND - especially when compared with 3D NAND). Others are repeating a bunch of terminology and element names without taking the time to actually explain how it works, and far too many folks out there can't even pronounce it correctly (it's spoken 'cross-point'). My plan is to address as much of the confusion as I can with this article, and I hope you walk away understanding how XPoint and its underlying technologies (most likely) work. While we do not have absolute confirmation of the precise material compositions, there is a significant amount of evidence pointing to one particular set of technologies. With Optane Memory now out in the wild and purchasable by folks wielding electron microscopes and mass spectrometers, I have seen enough additional information come across to assume XPoint is, in fact, PCM based.
XPoint memory. Note the shape of the cell/selector structure. This will be significant later.
While we were initially told at the XPoint announcement event Q&A that the technology was not phase change based, there is overwhelming evidence to the contrary, and it is likely that Intel did not want to let the cat out of the bag too early. The funny thing about that is that both Intel and Micron were briefing on PCM-based memory developments five years earlier, and nearly everything about those briefings lines up perfectly with what appears to have ended up in the XPoint that we have today.
Some die-level performance characteristics of various memory types. source
The above figures were sourced from a 2011 paper and may be a bit dated, but they do a good job putting some actual numbers with the die-level performance of the various solid state memory technologies. We can also see where the ~1000x speed and ~1000x endurance comparisons with XPoint to NAND Flash came from. Now, of course, those performance characteristics do not directly translate to the performance of a complete SSD package containing those dies. Controller overhead and management must take their respective cuts, as is shown with the performance of the first generation XPoint SSD we saw come out of Intel:
The ‘bridging the gap’ Latency Percentile graph from our Intel SSD DC P4800X review.
(The P4800X comes in at ~10µs in the chart above.)
There have been a few very vocal folks out there chanting 'not good enough', without the basic understanding that the first publicly available iteration of a new technology never represents its ultimate performance capabilities. It took NAND flash decades to make it into usable SSDs, and another decade before climbing to the performance levels we enjoy today. Time will tell if this holds true for XPoint, but given Micron's demos and our own observed performance of Intel's P4800X and Optane Memory SSDs, I'd argue that it is most certainly off to a good start!
A 3D XPoint die, submitted for your viewing pleasure (click for larger version).
Intro and Upgrading the PS4 Pro Hard Drive
When Sony launched the PS4 Pro late last year, it introduced an unusual mid-cycle performance update to its latest console platform. But in addition to increased processing and graphics performance, Sony also addressed one of the original PS4's shortcomings: the storage bus.
The original, non-Pro PlayStation 4 utilized a SATA II bus, capping speeds at 3Gb/s. This was more than adequate for keeping up with the console's stock hard drive, but those who elected to take advantage of Sony's user-upgradeable storage policy and install an SSD faced the prospect of a storage bus bottleneck. As we saw in our original look at upgrading the PS4 with a solid state drive, the SSD brought some performance improvements in terms of load times, but these improvements weren't always as impressive as we might expect.
We therefore set out to see what performance improvements, if any, could be gained by the inclusion of SATA III in the PS4 Pro, and if this new Pro model makes a stronger case for users to shell out even more cash for a high capacity solid state drive. We weren't the only ones interested in this test. Digital Foundry conducted their own tests of the PS4 Pro's SATA III interface. They found that while a solid state drive in the PS4 Pro clearly outperformed the stock hard drive in the original PS4, it generally didn't offer much improvement over the SATA II-bottlenecked SSD in the original PS4, or even, in some cases, the stock HDD in the PS4 Pro.
But we noticed a major issue with Digital Foundry's testing process. For their SSD tests, they used the OCZ Trion 100, an older SSD with relatively mediocre performance compared to its latest competitors. The Trion 100 also has a relatively low write endurance and we therefore don't know the condition and performance characteristics of Digital Foundry's drive.
To address these issues, we conducted our tests with a brand new 1TB Samsung 850 EVO. While far from the cheapest, or even most reasonable option for a PS4 Pro upgrade, our aim is to assess the "best case scenario" when it comes to SSD performance via the PS4 Pro's SATA III bus.
Introduction, Specifications, and Requirements
Finally! Optane Memory sitting in our lab! Sure, it’s not the mighty P4800X we remotely tested over the past month, but this is right here, sitting on my desk. It’s shipping, too, meaning it could be sitting on your desk (or more importantly, in your PC) in just a matter of days.
The big deal about Optane is that it uses XPoint Memory, which has fast-as-lightning (faster, actually) response times of less than 10 microseconds. Compare this to the fastest modern NAND flash at ~90 microseconds, and the differences are going to add up fast. What’s wonderful about these response times is that they still hold true even when scaling an Optane product all the way down to just one or two dies of storage capacity. When you consider that managing fewer dies means less work for the controller, we can see latencies fall even further in some cases (as we will see later).
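To see why those per-IO latencies "add up fast," consider a chain of dependent queue-depth-1 reads, the pattern behind many application launches. A back-of-the-envelope sketch, with the latency figures above as rough assumptions:

```python
# Rough comparison of total time for a chain of dependent (QD1) reads
# at the per-IO latencies quoted above. Figures are approximations.
XPOINT_US = 10  # ~10 microseconds per IO for Optane-class media
NAND_US = 90    # ~90 microseconds per IO for fast modern NAND

def chain_ms(io_count: int, per_io_us: float) -> float:
    """Total time in milliseconds for io_count serialized IOs."""
    return io_count * per_io_us / 1000

ios = 10_000  # a plausible dependent-read chain for a heavy workload
print(f"NAND:   {chain_ms(ios, NAND_US):.0f} ms")
print(f"XPoint: {chain_ms(ios, XPOINT_US):.0f} ms")
```

Because the IOs are serialized, no amount of added queue depth hides the per-IO latency, which is exactly where Optane's advantage shows up.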
Introduction and Specifications
XPoint. Optane. QuantX. We've been hearing these terms thrown around for two years now. A form of 3D stackable non-volatile memory that promised 10x the density of DRAM and 1000x the speed and endurance of NAND. These were bold statements, and over the following months, we would see them misunderstood and misconstrued by many in the industry. These misconceptions were further amplified by some poor demo choices on the part of Intel (fortunately countered by some better choices made by Micron). Eventually, cooler heads prevailed as Jim Handy and other industry analysts helped explain that a 1000x improvement at the die level does not translate to the same improvement at the device level, especially when the first round of devices must comply with what will soon become a legacy method of connecting a persistent storage device to a PC.
Did I just suggest that PCIe 3.0 and the NVMe protocol - developed specifically for high-speed storage - are already legacy tech? Well, sorta.
That 'Future NVM' bar at the bottom of that chart there was a 2-year old prototype iteration of what is now Optane. Note that while NVMe was able to shrink down the yellow bar a bit, as you introduce faster and faster storage, the rest of the equation (meaning software, including the OS kernel) starts to have a larger and larger impact on limiting the ultimate speed of the device.
NAND Flash simplified schematic (via Wikipedia)
Before getting into the first retail product to push all of these links in the storage chain to the limit, let's explain how XPoint works and what makes it faster. Taking random writes as an example, NAND Flash (above) must program cells in pages and erase cells in blocks. As modern flash has increased in capacity, the sizes of those pages and blocks have scaled up roughly proportionally. Today we are at page sizes >4KB and block sizes in the megabytes. When it comes to randomly writing to an already full section of flash, simply changing the contents of one byte on one page requires the clearing and rewriting of the entire block. The ratio between what the flash had to rewrite and what you actually asked to write is called the write amplification factor. It's something that must be dealt with when it comes to flash memory management, but for XPoint it is a completely different story:
XPoint is bit addressable. The 'cross' structure means you can select very small groups of data via wordlines, with the ultimate selection resolving down to a single bit.
Since the programmed element effectively acts as a resistor, its output is read directly and quickly. Even better - none of that write amplification nonsense mentioned above applies here at all. There are no pages or blocks. If you want to write a byte, go ahead. Even better is that the bits can be changed regardless of their former state, meaning no erase or clear cycle must take place before writing - you just overwrite directly over what was previously stored. Is that 1000x faster / 1000x more write endurance than NAND thing starting to make more sense now?
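The write amplification contrast above can be put into rough numbers. A minimal sketch using illustrative round-number geometry (not the page/block sizes of any specific part):

```python
# Worst-case write amplification when changing a single byte.
# NAND must erase and reprogram a whole block; XPoint writes in place.
# PAGE and BLOCK are illustrative round numbers, not a real part's geometry.
PAGE = 16 * 1024    # a 16KB page (modern NAND pages exceed 4KB)
BLOCK = 256 * PAGE  # a 4MB erase block (blocks are megabytes today)

def waf_nand_worst(bytes_changed: int) -> float:
    """Full-block rewrite divided by the bytes the host actually asked for."""
    return BLOCK / bytes_changed

def waf_xpoint(bytes_changed: int) -> float:
    """Bit-addressable write-in-place: only the requested bytes are written."""
    return 1.0

print(f"NAND, 1 byte changed:   WAF = {waf_nand_worst(1):,.0f}x")
print(f"XPoint, 1 byte changed: WAF = {waf_xpoint(1):.0f}x")
```

In practice, NAND controllers avoid the worst case with remapping and garbage collection, but the point stands: XPoint sidesteps the whole problem at the media level.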
Ok, with all of the background out of the way, let's get into the meat of the story. I present the P4800X:
Build and Upgrade Components
Spring is in the air! And while many traditionally use this season for cleaning out their homes, what could be the point of reclaiming all of that space besides filling it up again with new PC hardware and accessories? If you answered, "there is no point, other than what you just said," then you're absolutely right. Spring is a great time to procrastinate on housework and build up a sweet new gaming PC (what else would you really want to use that tax return for?), so our staff has listed their favorite PC hardware right now, from build components to accessories, to make your life easier. (Let's make this season far more exciting than taking out the trash and filing taxes!)
While our venerable Hardware Leaderboard has been serving the PC community for many years, it's still worth listing some of our favorite PC hardware for builds at different price points here.
Processors - the heart of the system.
No doubt about it, AMD's Ryzen CPU launch has been the biggest news of the year so far for PC enthusiasts, and while the 6- and 4-core variants are right around the corner, the 8-core R7 processors are still a great choice if you have the budget for a $300+ CPU. To that end, we really like the value proposition of the Ryzen R7 1700, which offers much of the performance of its more expensive siblings for a really compelling price, and can potentially be overclocked to match the higher-clocked members of the Ryzen lineup, though moving up to either the R7 1700X or R7 1800X will net you higher clocks (without increasing voltage and power draw) out of the box.
Really, any of these processors are going to provide a great overall PC experience with incredible multi-threaded performance for your dollar in many applications, and they can of course handle any game you throw at them - with optimizations already appearing to make them even better for gaming.
Don't forget about Intel, which has some really compelling options starting even at the very low end (Pentium G4560, when you can find one in stock near its ~$60 MSRP), thanks to their newest Kaby Lake CPUs. The high-end option from Intel's 7th-gen Core lineup is the Core i7-7700K (currently $345 on Amazon), which provides very fast gaming performance and plenty of power if you don't need as many cores as the R7 1700 (or Intel's high-end LGA-2011 parts). Core i5 processors provide a much more cost-effective way to power a gaming system, and an i5-7500 is nearly $150 less than the Core i7 while providing excellent performance if you don't need an unlocked multiplier or those additional threads.
Introduction and Packaging
Data Robotics shipped their first product 10 years ago. Dubbed the Drobo (short for Data Robot), it was a 4-bay, hot-swappable, USB 2.0-connected external storage device. At a time when RAID was still a term mostly unknown to typical PC users, the Drobo was already pushing the concept of data redundancy past what those familiar with RAID were used to. BeyondRAID offered a form of redundant data storage that decoupled rigid RAID structures from fixed-capacity disk packs. While most RAID volumes were 'dumb', BeyondRAID was aware of what was stored within its partitions, distributing that data in block format across the available disks. This not only significantly sped up rebuilds (only the used portions of the disks need be recopied), it also allowed for other cool tricks like the ability to mix drive capacities within the same array. Switching between parity levels could also be done on-the-fly and with significantly less effort than traditional RAID migrations.
While all of the above was great, the original Drobo saw performance hits from its block level management, which was limited by the processing overhead combined with the available processing power for such a device at the time. The first Drobo model was lucky to break 15 MB/s, which could not even fully saturate a USB 2.0 link. After the launch, requests for network attached capability led to the launch of the DroboShare, which could act as a USB to ethernet bridge. It worked but was still limited by the link speed of the connected Drobo. A Drobo FS launched a few years later, but it was not much quicker. Three years after that we got the 5N, which was finally a worthy contender in the space.
10 years and nearly a dozen models later, we now have the Drobo 5N2, which will replace the aging 5N. The newer model retains the same 5-bay form factor and mSATA bay for optional SSD cache but adds a second bondable Gigabit Ethernet port and upgrades most of the internals. Faster hardware specs and newer more capable firmware enables increased throughput and volume sizes up to 64TB. Since BeyondRAID is thin provisioned, you always make the volume as large as it can be and simply add disk capacity as the amount of stored content grows over time.
Today Samsung released an update to their EVO+ microSD card line. The new model is the 'EVO Plus'. Yes, I know, it's confusing to me as well, especially when trying to research the new vs. old iterations for this mini-review. Here are a few quick visual comparisons between both models:
On the left, we have the 'older' version of the Plus (I mean the '+'), while on the right we have the new plus, designated as a '2017 model' on the Samsung site. Note the rating differences between the two. The '+' on the left is rated at UHS-I U1 (10 MB/s minimum write speed), while the newer 'Plus' version is rated at UHS-I U3 (30 MB/s minimum write speed). I also ran across what looked like the older version packaging.
The packaging on the right is what we had in hand for this review. The image on the left was found at the Samsung website, and confuses things even further, as the 'Plus' on the package does not match the markings on the card itself ('+'). It looks as if Samsung may have silently updated the specs of the 256GB '+' model at some point in the recent past, as that model claims significantly faster write speeds (90 MB/s) than the older/other '+' models previously claimed (~20 MB/s). With that confusion out of the way, let's dig into the specs of this newest EVO Plus:
For clarification on the Speed Class and Grade, I direct you to our previous article covering those aspects in detail. For here I'll briefly state that the interface can handle 104 MB/s while the media itself is required to sustain a minimum of 30 MB/s of typical streaming recorded content. The specs go on to claim 100MB/s reads and 90 MB/s writes (60 MB/s for the 64GB model). Doing some quick checks, here's what I saw with some simple file copies to and from a 128GB EVO Plus:
Our figures didn't exceed the specified performance, but they came close enough to satisfy the 'up to' claim, with over 80 MB/s writes and 93 MB/s reads in simple file copies. I was able to separately confirm 85-89 MB/s writes and 99 MB/s reads with Iometer issuing 128KB sequential transfers.
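For those wanting to replicate the simple file-copy check, here is a rough sketch of a timed sequential write. The destination path is a placeholder for wherever the card mounts on your system, and this only approximates sustained write speed:

```python
# Timed sequential write to estimate a card's sustained write speed.
# The destination path is a placeholder -- point it at the mounted card.
import os
import time

def write_speed_mb_s(dst: str, size_mb: int = 512) -> float:
    """Write size_mb of incompressible data and return MB/s."""
    buf = os.urandom(1024 * 1024)  # 1MB of random data defeats compression
    start = time.perf_counter()
    with open(dst, "wb") as f:
        for _ in range(size_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # ensure the data actually reached the media
    return size_mb / (time.perf_counter() - start)

# Example (hypothetical mount point):
# print(f"{write_speed_mb_s('/media/evoplus/test.bin'):.0f} MB/s")
```

The fsync matters: without it, you measure the OS write cache rather than the card.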
- 32GB: $29.99
- 64GB: $49.99
- 128GB: $99.99
- 256GB: coming soon (but there is already a 256GB EVO+ of similar specs???)
Pricing seems to be running a bit high on these, at close to double that of the previous version of this very same part (the EVO+ 128GB can be found for $50 at the time of this writing). Sure, you are getting a U3-rated card with over four times the achievable write speed, but the reads are very similar, and if your camera only requires U1 speeds, the price premium does not seem worthwhile. It is also worth noting that even faster UHS-II spec cards that transfer at 150 MB/s can be had, and even come with a reader, at a lower cost.
In summary, the Samsung EVO Plus microSD cards look to be decent performers, but the pricing needs to come down some to be truly competitive in this space. I'd also like to see the product labeling and marketing made a bit clearer between the '+' and the 'Plus' models, as they can easily confuse those not so familiar with SD card classes and grades. It also makes searching for them rather difficult, as most search engines parse 'Plus' interchangeably with '+', adding to the potential confusion.
The Need for Speed
Around here, storage is Allyn's territory, but I decided to share my experience with a new $20 flash drive I picked up that promised some impressive speeds via USB 3.0. The drive is the Lexar JumpDrive P20, and I bought the 32GB version, which is the lowest capacity of the three drives in the series. 64GB and 128GB versions of the JumpDrive P20 are available, with advertised read speeds of up to 400 MB/s from all three, and up to 270 MB/s writes - if you buy the largest capacity.
My humble 32GB model still boasts up to 140 MB/s writes, which would be faster than any USB drive I’ve ever owned (my SanDisk Extreme USB 3.0 16GB drive is limited to 60 MB/s writes, and can hit about 190 MB/s reads), and the speeds of the P20 even approach that of some lower capacity SATA 3 SSDs - if it lives up to the claims. The price was right, so I took the plunge. (My hard-earned $20 at stake!)
Size comparison with other USB flash drives on hand (P20 on far right)
First we'll look at the features from Lexar:
- Among the fastest USB flash drives available, with speeds up to 400MB/s read and 270MB/s write
- Sleek design with metal alloy base and high-gloss mirror finish top
- Securely protects files using EncryptStick Lite software, an advanced security solution with 256-bit AES encryption
- Reliably stores and transfers files, photos, videos, and more
- High-capacity options to store more files on the go
- Compatible with PC and Mac systems
- Backwards compatible with USB 2.0 devices
- Limited lifetime warranty
Introduction, Specifications and Packaging
Micron paper-launched their 5100 Series Enterprise SATA SSD lineup early last month. The new line promised many sought-after features for such a part, namely high performance, high performance consistency, high capacities, and relatively low cost/GB (thanks to IMFT 3D NAND, which is now well into volume production after launching nearly two years ago). The highs and lows I just rattled off are not only good for enterprise, they are good for general consumers as well. Enterprises deal in large SSD orders, which translates to increased production and ultimately a reduction in the production cost of the raw NAND that also goes into client SSDs and other storage devices.
The 5100 Series comes in three tiers and multiple capacities per tier (with even more launching over the next few months). Micron sampled us a 2TB 'ECO' model and a 1TB 'MAX'. The former is optimized more for read intensive workloads, while the latter is designed to take a continuous random write beating.
I'll be trying out some new QoS tests in this review, with plans to expand out with comparisons in future pieces. This review will stand as a detailed performance verification of these two parts - something we are uniquely equipped to accomplish.
Introduction, Specifications and Packaging
Earlier this year we covered the lower two capacities of the Samsung 750 EVO. We had some requests for a review of the 500GB model as soon as it was added to the lineup, and Samsung promptly sent a sample, but I delayed that review in the interest of getting the full 750 EVO lineup tested under our new storage test suite. I've been running batches of SSDs through this new suite, and we now have enough data points to begin cranking out some reviews. The 750 EVO was at the head of the line, so it is up first. I'm 'reissuing' our review as a full capacity roundup of the 750 EVO lineup, as these are fresh results on a completely new test suite.
These are the 'Rev. 2' specifications from Samsung, which include the 500GB model of the 750 EVO. The changes are not significant, mainly a slight bump to random performance of the top capacity model along with a changeover to lower power DDR3 (of twice the capacity) for the 500GB model's system cache.
Nothing new here. This is the standard Samsung packaging for their SATA products.
Introduction, Specifications and Packaging
Since Samsung’s announcement of the 960 Series SSDs, I have been patiently waiting not for the 960 PRO (reviewed a few weeks back), but for the 960 EVO. It is the EVO, in my opinion, that is the big release here. Sure, it doesn’t have the quad Hexadecimal Die Packages, Package-on-Package DRAM and ultimate higher capacity of the PRO, but what it *does* potentially have is class-leading performance/price in the M.2 form factor. Just as we all wanted lower-cost SSDs in the 2.5” SATA form factor, M.2 is seeing greater adoption across laptops and desktop motherboards, and it’s high time we started seeing M.2 SSDs come down in price.
I know, don’t tell me, the Intel 600p carries a SATA-level cost/GB in an M.2 form factor. Sure that’s great, and while I do recommend that SSD for those on a budget, its caching scheme comes with some particularly nasty inconsistencies in sustained writes that may scare off some power users. Samsung 840/850 EVO SSDs have historically handled the transitions between SLC cache and TLC bulk writes far better than any competing units, and I’ve eagerly anticipated the chance to see how well their implementation carries over to an NVMe SSD. Fortunately for us, that day is today:
An important point to note in the performance specs - the lowest capacity model is the only one to see its performance significantly taper in stated specifications. That is because even with its 48-layer VNAND operating in SLC mode, there are only two packages on all 960 EVOs and the 250GB capacity comes equipped with the fewest dies to spread the work across. Less parallelism leads to lower ultimate performance. Still, it is impressive to see only 250GB of flash reaching near saturation of PCIe 3.0 x4 in reads.
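The parallelism point can be sketched numerically: aggregate throughput scales with die count until the host interface becomes the ceiling. The per-die and link figures below are illustrative assumptions, not Samsung's numbers:

```python
# Throughput scales with flash die count until the PCIe link saturates.
# Per-die and link figures are illustrative assumptions.
PER_DIE_MB_S = 400     # assumed per-die read throughput
PCIE3_X4_MB_S = 3500   # practical ceiling of a PCIe 3.0 x4 link

def drive_read_mb_s(die_count: int) -> int:
    """Aggregate reads are limited by either the dies or the host link."""
    return min(die_count * PER_DIE_MB_S, PCIE3_X4_MB_S)

for dies in (4, 8, 16, 32):
    print(f"{dies:>2} dies -> {drive_read_mb_s(dies)} MB/s")
```

This is why the lowest capacity model, with the fewest dies to spread the work across, is the only one whose stated specs taper noticeably.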
I've appended the 'sustained' (TLC) performance specs at the bottom of the above chart. These 'after TurboWrite' figures are the expected performance after the SLC cache has been depleted. This is nearly impossible in actual usage scenarios, as it is extremely difficult for any typical (or even power user) desktop workloads to write fast and long enough to deplete such a cache, especially considering how much larger these caches are compared to prior models.
Samsung has carried forward the simple packaging introduced with the 960 PRO. The felt pad on the bottom of the installation guide is both functional and elegant, keeping the 960 EVO safely in place during shipment.
Introduction, Specifications and Packaging
Just under a year ago we published our review of the Samsung 950 PRO, their first foray into NVMe SSD territory. Today we have a 960 PRO, which strives to be more revolutionary than evolutionary. There are some neat new features like 16-die packages and a Package-on-Package controller/DRAM design, all cooled by a copper heat spreading label! This new model promises to achieve some very impressive results, so without further delay, let's get to it!
Specs have not changed since the announcement. Highlights include:
- A new 5-core Polaris controller (with one die solely dedicated to coordinating IO's to/from the host)
- 4-Landing Design - It's tough fitting four flash packages onto an M.2 2280 SSD, but Samsung has done it, thanks to the below feature.
- Package-on-Package - The controller and DRAM are stacked within the same package, saving space.
- Hexadecimal Die Packages - For the 960 Pro to reach 2TB of capacity, 16 48-layer MLC V-NAND dies must be present within each package. That's a lot of dies per package!
Nice touch with the felt pad on the bottom of the installation guide. This pad keeps the 960 Pro safely in place during shipment.
Introduction and Packaging
The Drobo 5D launched a few years ago and continues to be a pricey solution, running close to $600. That price stems from the added complexity of its mSATA hot-data cache and other features, which drove the cost higher than some potential buyers were happy with. Sure, the cache was nice, but many photographers and videographers edit their content on a faster internal SSD and only shift their media to external storage in bulk sequential file copies. These users don’t necessarily need a caching tier built into their mass storage device - they just want good straight-line speed to offload their data as fast as possible.
With new management and a renewed purpose with a focus on getting lower cost yet performant products out there, Drobo relaunched their base 4-bay product in a third-generation form. We tested that unit back in December of 2014, and its performance was outstanding for a unit that typically runs in the mid-$200 price range. The price and performance were great, but things were a bit tight when trying to use Dual Disk Redundancy while limited to only four installed drives. A fifth bay would have certainly been handy, as would USB-C connectivity, which brings me to the subject of today’s review:
I present to you the Drobo 5C. Essentially a 5-bay replacement to the 4-bay 3rd gen Drobo. This will become the new base model Drobo, meaning there will no longer be any 4-bay models in Drobo's product lineup:
Introduction, Specifications, and Packaging
It's been quite some time since we saw a true client SSD come out of Intel. The last client product to use their legendary 10-channel controller was the SSD 320 (launched in 2011), and even that product had its foot in the enterprise door as it was rated for both client and enterprise usage. The products that followed began life as enterprise parts and were later reworked for consumer usage. The big examples here are the SATA-based SSD 730 (which began life as the SSD DC S3500/3700), and the PCIe/NVMe-based SSD 750 (which was born from the SSD DC P3700). The enterprise hardware had little support for reduced power states, which led Intel to market the 730 as a desktop enthusiast part. The 750 had a great NVMe controller, but the 18-channel design and high idle power draw meant no chance for an M.2 form factor version of the same. With the recent addition of low-cost 3D NAND to their production lines, Intel has now begun another push into the consumer space. The mainstream client SSD of the new line is the 600p, which we will be taking a look at today:
Introduction, Specifications, and Packaging
Intel launched their Datacenter 'P' Series parts a little over two years ago. Since then, the P3500, P3600, and P3700 lines have seen various expansions and spinoffs. The most recent to date was the P3608, which packed two full P3600s into a single HHHL form factor. With Intel 3D XPoint / Optane parts lurking just around the corner, I had assumed there would be no further branches of the P3xxx line, but Intel had other things in mind. IMFT 3D NAND offers greater die capacities at a reduced cost/GB, apparently even in MLC form, and Intel has infused this flash into their new P3520:
Remember, the P3500 series was the lowest-end member of Intel's P line, and as far as performance goes, the P3520 actually takes a further step back. The play here is to get the proven quality control and reliability of Intel's datacenter parts into a lower-cost product. While the P3500 launched at $1.50/GB, the P3520 pushes that cost down *well* below $1/GB for a 2TB HHHL or U.2 SSD.
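For readers who like to sanity-check pricing claims, the cost/GB math above is straightforward. The $1.50/GB P3500 launch figure comes from this article; the P3520 street price below is purely hypothetical, chosen only to illustrate what "well below $1/GB" looks like for a 2TB drive:

```python
def cost_per_gb(price_usd: float, capacity_gb: int) -> float:
    """Return drive cost in dollars per gigabyte."""
    return price_usd / capacity_gb

# P3500-era pricing: a 2TB drive at $1.50/GB would have run $3,000.
p3500_price = 1.50 * 2000
print(f"P3500-class 2TB: ${p3500_price:,.0f}")  # $3,000

# Hypothetical P3520 price for the same 2TB capacity (illustrative,
# not a quoted figure from Intel or this review):
p3520_price = 1800.0
print(f"{cost_per_gb(p3520_price, 2000):.2f} $/GB")  # 0.90 $/GB
```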
Introduction and Specifications
Barracuda is a name we have not heard from Seagate in a good while. Last seen on their 3TB desktop drive, it appears they thought it was time for a comeback. The company is revamping their product lines, along with launching a full round of 10TB helium-filled offerings that cover just about anything you might need:
Starting from the center, IronWolf is their NAS drive, optimized for arrays of up to 8 disks. To the right is their surveillance drive offering, the SkyHawk. These are essentially NAS units with custom firmware optimized for recording multiple streams. Not mentioned above is the FireCuda, which is a rebrand of their Desktop SSHD. Those are not He-filled (yet), as their max capacity is not high enough to warrant it. We will be looking at the first two models in future pieces, but the subject of today’s review is the BarraCuda line. The base 3.5” BarraCuda line only goes to 4TB, while the BarraCuda Pro expands upon those capacities with 6TB, 8TB, and 10TB models. Our review sample is the 10TB BarraCuda Pro.
Introduction, Packaging, and Internals
Being a bit of a storage nut, I have run into my share of failed and/or corrupted hard drives over the years, and I have used many different data recovery tools to try to get that data back. Thankfully, I now employ a backup strategy that should minimize the need for such tools, but there will always be cases of fresh data on a drive that went down before the most recent backup, or a neighbor or friend who did not have a backup at all.
I’ve got a few data recovery pieces in the cooker, but this one focuses on ‘physical data recovery’ from drives with physically damaged or degraded sectors and/or heads. I’m not talking about so-called ‘logical data recovery’, where the drive is physically fine but has suffered some corruption that makes the data inaccessible by normal means (undelete programs also fall into this category). There are plenty of ‘hard drive recovery’ apps out there, and most if not all of them claim seemingly miraculous results on your physically failing hard drive. While there are absolutely success stories out there (most plastered all over the testimonial pages of those respective sites), one must take those with an appropriate grain of salt. Someone who just got their data back with a <$100 program is going to be very vocal about it, while those who had their drive permanently fail during the process are likely to go cry quietly in a corner while saving up for a clean-room-capable service to repair their drive and attempt to get their stuff back. I'll dig into the exact issues with using software tools for hardware problems later in this article, but for now: surely there has to be some way to attempt these first few steps of data recovery without resorting to software tools that can potentially cause more damage?
Well, now there is. Enter the RapidSpar, made by DeepSpar, who hope this little box can bridge the gap between dedicated data recovery operations and home users risking software-based recoveries of failing hardware. DeepSpar is best known for making the advanced tools used by big data recovery operations, so they know a thing or two about this stuff. I could go on and on here, but I’m going to save that for after the intro page. For now, let’s get into what comes in the box.
Note: In this video, I read the MFT prior to performing RapidNebula Analysis. It's optimal to reverse those steps. More on that later in this article.