Synology DS1019+ Review
Synology this week is launching the DS1019+, a 5-bay counterpart to last year's 4-bay DS918+. Like most of the company's "Plus" series devices, it is aimed at higher-end home users and small businesses with a price (without drives) of $649.99.
Synology loaned us a review unit of the DS1019+ prior to launch, and after adding it to our growing shelf of network storage devices, we spent some time seeing how this new model compares to its predecessors and counterparts.
Specifications & Design
The design of the DS1019+ is virtually identical to that of the DS918+, with the same style of drive bays, same case material and color, same basic layout of ports and status lights, and even an almost identical list of technical specs. The biggest difference between the two by far is simply the addition of a fifth drive bay on the DS1019+. So, if you liked the look and feel of the DS918+, you should feel the same way about the DS1019+.
Following the design trends of other Synology NAS devices in recent years, the DS1019+ is compact considering its capabilities. It measures in at 166mm x 230mm x 223mm (about 6.5 x 9.0 x 8.8 inches) and weighs about 5.6 pounds without drives. Included in the box is the power adapter with region-appropriate power cord, two five-foot Cat5e Ethernet cables, an accessory kit with two keys for the drive bay locks, 20 screws for mounting 2.5-inch drives in the 3.5-inch drive bays, and a quick installation guide.
Like almost all Synology NAS devices, the DS1019+ ships without drives, so you'll need to add your own mechanical or solid state drives in order to use the device. If you want to configure the NAS as a traditional RAID, you'll want to populate the drive bays with drives of the same capacity, ideally from the same vendor. If you need to mix and match drive vendors, at least aim to use drives with identical performance specifications. Similar in concept to Drobo, Synology also offers a "Hybrid RAID" (SHR) option that allows users to combine drives of different sizes or later expand the array by replacing smaller drives with larger ones. Depending on drive types and size mismatches, however, there is a performance penalty to going this route compared to a similar RAID configuration utilizing identical disks.
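To get a feel for the capacity trade-off, here's a rough sketch of how usable space works out for a classic RAID 5 versus SHR with one-drive redundancy. The `shr1_usable` approximation and the example drive mix are our own illustration; Synology's official RAID calculator is the authoritative source:

```python
def raid5_usable(drives):
    """Classic RAID 5: every member is truncated to the smallest drive,
    and one drive's worth of capacity goes to parity."""
    return (len(drives) - 1) * min(drives)

def shr1_usable(drives):
    """Rough approximation of SHR with 1-drive redundancy: total
    capacity minus the largest drive. Ignores edge cases that
    Synology's official calculator handles."""
    return sum(drives) - max(drives)

mixed = [4, 4, 8, 8, 8]  # TB, a hypothetical 5-bay fill
print(raid5_usable(mixed))  # 16 TB -- the 8 TB drives get truncated to 4 TB
print(shr1_usable(mixed))   # 24 TB -- SHR reclaims the difference
```

With identical drives the two schemes yield the same capacity; the gap only opens up (in SHR's favor) once the drive sizes diverge.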
As alluded to, the DS1019+ is powered by the same CPU found in the DS918+: the Intel Celeron J3455, a quad-core 10-watt Apollo Lake part. With base and boost clocks of 1.5GHz and 2.3GHz, respectively, the J3455 is more than powerful enough to accommodate the transfer and management of data on the NAS, and it also supports hardware video transcoding, which is a huge advantage for services like Plex.
I Have a Need, a Need for Download Speed
Thanks to Wendell from Level1Techs for his all of his help on this project and pointing us in the right direction!
A few years ago, we were fortunate enough to get a fiber internet connection installed at the PC Perspective office. Capable of 1Gbps download speeds and about 250Mbps upload, we were excited at the possibilities that lay ahead.
However, when you have access to a very fast internet connection, you begin to notice that the bottleneck has shifted from your connection to the servers on the other side of the content delivery networks (CDNs) that power the internet. While these CDNs have very fast links to the internet, they generally limit per-user bandwidth so that capacity can be shared among many simultaneous users.
DL from Steam at 101 MB/s! So fast that the SSD is having trouble keeping up. Able to grab 30+ GB in under 5 mins. pic.twitter.com/ptHUyQVHJr
— Ryan Shrout (@ryanshrout) September 3, 2014
A look back at what once was
One of the services that we found would max out our connection was Steam. Since we download a lot of PC games at the office, it was a nice benefit to have an internet connection as fast as our NICs could handle, and that the Steam CDNs would serve us at our maximum potential. In fact, the bottleneck shifted over to storage performance, as the random writing nature of Steam thrashed our SSDs at the time.
By no stretch of the imagination is 60MB/s slow... but what happened to our 100MB/s?
Unfortunately, this is no longer the case. At some point, Steam downloads started getting slower on the same internet connection. Not only did storage utilization during a Steam download start to increase, but so did CPU usage, pointing to a potential change in how Steam distributed its data. While downloads on our high-end systems fell to around 50-60MB/s, systems with less CPU horsepower started to see speeds fall to 20-30MB/s. All hope was lost for fast game downloads... or was it?
Recently, Wendell from Level1Techs mentioned on Twitter that they were running a local Steam caching server on their network with great success. After some guidance from Wendell, we decided to tackle this project and see if it would help our specific scenario.
Subject: Storage | July 18, 2017 - 07:31 PM | Jeremy Hellstrom
Tagged: XPoint, srt, rst, Optane Memory, Optane, Intel, hybrid, CrossPoint, cache, 32GB, 16GB
It has been a few months since Al looked at Intel's Optane and its impressive performance and price, so it seems appropriate to revisit the 2280 M.2 stick with its PCIe 3.0 x2 interface. It is not just the performance that is interesting but also the technology behind Optane and its limitations. For anyone looking to utilize Optane, it is worth reminding you of the compatibility limitations Intel requires: only Kaby Lake processors of Core i7, i5, or i3 heritage. If you already qualify, or are planning a system build, you can revisit the performance numbers over at Kitguru.
"Optane is Intel’s brand name for their 3D XPoint memory technology. The first Optane product to break cover was the Optane PC P4800X, a very high-performance SSD aimed at the Enterprise segment. Now we have the second product using the technology, this time aimed at the consumer market segment – the Intel Optane Memory module."
Here are some more Memory articles from around the web:
- G.SKILL TridentZ RGB 3600 MHz C16 DDR4 @ techPowerUp
- GSKill Trident Z 4133Mhz RGB CL19 DDR4 Dual Channel Memory Review @ Hardware Asylum
- Ballistix Elite 3466 MHz DDR4 @ techPowerUp
Subject: Storage | April 24, 2017 - 05:20 PM | Jeremy Hellstrom
Tagged: XPoint, srt, rst, Optane Memory, Optane, Intel, hybrid, CrossPoint, cache, 32GB, 16GB
At $44 for 16GB or $77 for a 32GB module, Intel's Optane Memory will cost you less in total than an M.2 SSD, though at a significantly higher price per gigabyte. The catch is that you need a Kaby Lake Core system to be able to utilize Optane, which means you are unlikely to be using an HDD. Al's tests show that Optane will also benefit a system using an SSD, reducing latency noticeably, although not as significantly as with an HDD.
The Tech Report tested it differently, sourcing a brand new desktop system with a Kaby Lake Core CPU that did not ship with an SSD. Once installed, the Optane drive enabled the system to outpace an affordable 480GB SSD in some scenarios; very impressive for an HDD-based machine. They also peeked at the difference Optane makes when paired with that same affordable SSD in their full review.
"Intel's Optane Memory tech purports to offer most of the responsiveness of an SSD to systems whose primary storage device is a good old hard drive. We put a 32GB stick of Optane Memory to the test to see whether it lives up to Intel's claims."
Here are some more Storage reviews from around the web:
- Intel Optane Memory Review - 1.4GB/s Speed & 300K IOPS for $44 @ The SSD Review
- The Intel Optane Memory Module Review @ Hardware Canucks
- Kingston DCP1000 NVMe SSD Reaches 7GB/s @ Kitguru
- WD Blue 1,000 GiB SSD @ Hardware Secrets
- Synology DiskStation DS916+ 4-Bay NAS @ Kitguru
- Drobo 5N2 NAS @ Kitguru
- Kingston Ultimate GT 2TB Flash Drive @ The SSD Review
- Toshiba X300 6TB HDD @ Kitguru
Introduction, Specifications, and Requirements
Finally! Optane Memory sitting in our lab! Sure, it’s not the mighty P4800X we remotely tested over the past month, but this is right here, sitting on my desk. It’s shipping, too, meaning it could be sitting on your desk (or more importantly, in your PC) in just a matter of days.
The big deal about Optane is that it uses XPoint Memory, which has fast-as-lightning (faster, actually) response times of less than 10 microseconds. Compare this to the fastest modern NAND flash at ~90 microseconds, and the differences are going to add up fast. What’s wonderful about these response times is that they still hold true even when scaling an Optane product all the way down to just one or two dies of storage capacity. When you consider that managing fewer dies means less work for the controller, we can see latencies fall even further in some cases (as we will see later).
Subject: Storage | March 27, 2017 - 12:16 PM | Allyn Malventano
Tagged: XPoint, Optane Memory, Optane, M.2, Intel, cache, 3D XPoint
We are just about to hit two years since Intel and Micron jointly launched 3D XPoint, and there have certainly been a lot of stories about it since. Intel officially launched the P4800X last week, and this week they are officially launching Optane Memory. The base level information about Optane Memory is mostly unchanged, however, we do have a slide deck we are allowed to pick from to point out some of the things we can look forward to once the new tech starts hitting devices you can own.
Alright, so this is Optane Memory in a nutshell. Put some XPoint memory on an M.2 form factor device, leverage Intel's SRT caching tech, and you get a 16GB or 32GB cache laid over your system's primary HDD.
To help explain what Optane can do for typical desktop workloads, we first need to dig into queue depths a bit. Above are some examples of the typical QD that various desktop applications run at. This data comes from direct IO trace captures of systems in actual use. Now that we've established that the majority of desktop workloads operate at very low queue depths (<= 4), let's see where Optane performance falls relative to other storage technologies:
There's a bit to digest in this chart, but let me walk you through it. The ranges tapering off show the percentage of IOs falling at the various queue depths, while the green, red, and orange lines ramping up to higher IOPS (right axis) show relative SSD performance at those same queue depths. The key to Optane's performance benefit here is that it can ramp up to full performance at very low QDs, while the other NAND-based parts require significantly higher parallel request counts to achieve their full rated performance. This is what will ultimately lead to much snappier responsiveness for, well, just about anything hitting the storage. Fun fact - there is actually an HDD on that chart. It's the yellow line that you might have mistaken for the horizontal axis :).
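To make the weighting concrete, here is a small sketch that combines a hypothetical queue-depth distribution with illustrative per-QD IOPS figures for an Optane-like and a NAND-like device. Every number below is invented for illustration; only the weighting method reflects the chart's argument:

```python
# Hypothetical QD distribution for a desktop workload (weights sum to 1)
# and made-up per-QD random-read IOPS figures -- not measured data.
qd_weight = {1: 0.60, 2: 0.25, 4: 0.10, 8: 0.05}

optane_iops = {1: 190_000, 2: 280_000, 4: 360_000, 8: 400_000}
nand_iops   = {1:  12_000, 2:  24_000, 4:  48_000, 8:  95_000}

def effective_iops(iops_by_qd):
    # Weight each device's throughput by how often the workload
    # actually runs at that queue depth.
    return sum(qd_weight[qd] * iops_by_qd[qd] for qd in qd_weight)

print(f"Optane-like: {effective_iops(optane_iops):,.0f} effective IOPS")
print(f"NAND-like:   {effective_iops(nand_iops):,.0f} effective IOPS")
```

Even if a NAND drive posts big numbers at deep queues, weighting by where desktop workloads actually spend their time is what makes the low-QD ramp dominate the perceived snappiness.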
As you can see, we have a few integrators on board already. Official support requires a 200-series motherboard (e.g. Z270) and a Kaby Lake CPU, but it is possible that motherboard makers could backport the required NVMe v1.1 and Intel RST 15.5 support to older systems.
For those wondering if caching is the only route power users will be able to take with Optane: that's not the case. Atop that pyramid sits an 'Intel Optane SSD', which should essentially be a consumer version of the P4800X. It is sure to be an incredibly fast SSD, but that performance will most definitely come at a price!
We should be testing Optane Memory shortly and will finally have some publishable results of this new tech as soon as we can!
Subject: General Tech | February 23, 2017 - 10:45 AM | Jeremy Hellstrom
Tagged: hbll, cache, l3 cache, Last Level Cache
There is an insidious latency gap lurking in your computer between your DRAM and your CPU's L3 cache. The size of the gap depends on your processor, as not all L3 caches are created equal, but regardless there are wasted CPU cycles which could be reclaimed. Piecemakers Technology, the Industrial Technology Research Institute of Taiwan, and Intel are on the case, with a project to design something to fit into that niche between the CPU and DRAM. Their prototype Last Level Cache is a chip with 17ns latency, which would improve the efficiency with which the L3 cache could be filled before passing data on to the next level in the CPU. The Register likens it to the way Intel has fit XPoint between the speeds of SSDs and DRAM. It will be interesting to see how this finds its way onto the market.
"Jim Handy of Objective Analysis writes about this: "Furthermore, there's a much larger latency gap between the processor's internal Level 3 cache and the system DRAM than there is between any adjacent cache levels.""
Here is some more Tech News from around the web:
- Get this: Tech industry thinks journos are too mean. TOO MEAN?! @ The Register
- Google Releases an AI Tool For Publishers To Spot and Weed Out Toxic Comments @ Slashdot
- Nintendo Switch impressions: Out of the box and into our hands @ Ars Technica
- Galaxy S8+ specs revealed, 10nm Exynos 9 processor confirmed @ The Inquirer
- Ah, the Raspberry Pi 3. So much love. So much power ... So turn it into a Windows thin client @ The Register
Subject: Storage | February 15, 2017 - 08:58 PM | Allyn Malventano
Tagged: XPoint, ssd, Optane, memory, Intel, cache
We now have an actual Optane landing page on the Intel site that discusses the first iteration of 'Intel Optane Memory', which appears to be the 8000p Series that we covered last October and saw as an option on some upcoming Lenovo laptops. The site does not cover the upcoming enterprise parts like the 375GB P4800X, but instead, focuses on the far smaller 16GB and 32GB 'System Accelerator' M.2 modules.
Despite using only two lanes of PCIe 3.0, these modules turn in some impressive performance, but the capacities when using only one or two (16GB each) XPoint dies preclude an OS install. Instead, these will be used, presumably in combination with a newer form of Intel's Rapid Storage Technology driver, as a caching layer meant as an HDD accelerator:
While the random write performance and endurance of these parts blow any NAND-based SSD out of the water, the 2-lane bottleneck holds them back compared to high-end NVMe NAND SSDs, so we will likely see this first consumer iteration of Intel Optane Memory in OEM systems equipped with hard disks as their primary storage. A very quick 32GB caching layer should help speed things up considerably for the majority of typical buyers of these types of mobile and desktop systems, while still keeping the total cost below that for a decent capacity NAND SSD as primary storage. Hey, if you can't get every vendor to switch to pure SSD, at least you can speed up that spinning rust a bit, right?
Subject: General Tech | January 30, 2017 - 12:42 PM | Jeremy Hellstrom
Tagged: steam, cache, Nginx, ubuntu
There are tricks to managing your Steam library if you are running low on space or simply setting up something new, from fooling Steam by copying files manually to the newer feature which allows you to move games from within Steam itself. One other way to save time and bandwidth is to build yourself a small webserver which locally caches any Steam game you have downloaded, so you can reinstall games without using up your bandwidth. Those familiar with Riverbed appliances and the like will already know this process, but many gamers may not. Ars Technica walks you through the build and teaches a bit about caching and basic webservers along the way; check it out if you are not already well versed in setting up something similar.
"But there’s an alternative to having to re-download all your Steam games from the Internet: you can set up a local Steam caching server, so that once you download something, you’ve got it on your LAN instead of having to reach for it across the net and incur usage fees."
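As a rough illustration of what such a caching server looks like, here is a minimal nginx sketch in the spirit of the Ars Technica walkthrough. Hostnames, paths, and cache sizes are placeholders, and your LAN's DNS must redirect Steam's content hostnames to the box running this config:

```nginx
# Sketch of a Steam caching proxy -- loosely based on the approach in
# the Ars Technica guide; all values here are illustrative only.
proxy_cache_path /var/cache/steam levels=2:2 keys_zone=steam:100m
                 inactive=200d max_size=500g;

server {
    listen 80;
    # LAN DNS overrides send Steam content requests to this box.
    server_name *.steamcontent.com;

    # Upstream lookups must bypass the LAN override, or the proxy
    # will loop back to itself; use an external resolver.
    resolver 8.8.8.8;

    location / {
        proxy_cache steam;
        # Depot chunks are immutable, so long cache lifetimes are safe.
        proxy_cache_valid 200 90d;
        proxy_ignore_headers Expires Cache-Control;
        proxy_pass http://$host$request_uri;
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

The first download of a game fills the cache at your normal internet speed; every reinstall after that is served at LAN speed.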
Here is some more Tech News from around the web:
- Naughty sysadmins use dark magic to fix PCs for clueless users @ The Register
- Microsoft's Coming Windows 10 Cloud Release May Have Nothing To Do With the Cloud @ Slashdot
- Intel's Q4 was 'terrific' and 'record setting' says CEO as profits dip @ The Register
- Microsoft Reports New Subscribers For Office 365 Plunged 62% @ Slashdot
- Seagate pledges to make 16TB hard drive by 2018 @ The Inquirer
- Semi-Removed from Reality: How Windows Holographic Can Change Life As We Know It @ Hardware Secrets
- LIFX Gen3 Bulbs and LIFX Z Lightstrip SmartHome Lighting @ eTeknix
- Netis WF2375 AC600 Wireless Dual-Band Outdoor AP Router @ eTeknix
Introduction, Specifications and Packaging
Since their acquisition by Toshiba in early 2014, OCZ has gradually transitioned their line of SSD products to include parts provided by their parent company. Existing products were switched over to Toshiba flash memory, and that transition went fairly smoothly, save the recent launch of their Vector 180 (which had a couple of issues noted in our review). After that release, we waited for the next release from OCZ, hoping for something fresh, and that appears to have just happened:
OCZ sent us a round of samples of their new OCZ Trion 100 SSD. This SSD was first teased at Computex 2015. This new model would not only use Toshiba-sourced flash memory, it would also replace the OCZ / Indilinx Barefoot controller with Toshiba's own. Then codenamed 'Alishan', it is now officially called the 'Toshiba Controller TC58'. As we found out during Computex, this controller employs Toshiba's proprietary Quadruple Swing-By Code (QSBC) error correction technology:
Error correction tech gets very wordy and technical very quickly, so I'll do my best to simplify things. Error correction is basically extra information interleaved within the data stored on a given medium, and pretty much everything uses it in some form or another. Those 700MB CD-Rs you used to burn could physically hold over 1GB of data, but all of that extra 'unavailable' space was error correction necessary to deal with the possible scratches and dust over time. Hard drives do the same sort of thing, with recent changes to how the data is interleaved. Early flash memory employed the same sort of simple error correction techniques, but advances in the understanding of flash memory error modes have since led to flash-specific error correction techniques. More advanced algorithms require more advanced math that may not easily lend itself to hardware acceleration. Referencing the above graphic, BCH is simple to perform when needed, while LDPC is known to be more CPU (read: SSD controller CPU) intensive. Toshiba's proprietary QSBC tech claims to be 8x more capable of correcting errors, but we don't know what, if any, performance penalty exists on account of it.
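As a concrete (if toy-sized) example of the redundancy idea, here is a classic Hamming(7,4) code in Python: four data bits pick up three parity bits, which is enough to locate and repair any single flipped bit. This is textbook material for illustration only; it bears no relation to Toshiba's proprietary QSBC beyond the general principle:

```python
# Toy single-error-correcting Hamming(7,4) code. Codeword layout
# (1-based positions): p1 p2 d1 p3 d2 d3 d4.

def hamming74_encode(d):           # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]        # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]        # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]        # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    # Recompute the parity checks; the syndrome is the 1-based
    # position of the flipped bit (0 means no error detected).
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1       # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                       # simulate a flash read error
print(hamming74_decode(code))      # recovers [1, 0, 1, 1]
```

Real flash codes (BCH, LDPC, QSBC) operate on kilobyte-sized pages and correct dozens of bit errors per codeword, but the trade they make is the same: extra stored bits in exchange for the ability to undo corruption.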
We will revisit this topic a bit later in the review, but for now let's focus on the other things we know about the Trion 100. The easiest way to explain it is that this is essentially Toshiba's answer to Samsung's EVO series of SSDs. The Toshiba flash is configured in a similar fashion, meaning the bulk of it operates in TLC mode while a portion is segmented off and operates as a faster SLC-mode cache. Writes first go to the SLC area and are purged to TLC in the background during idle time. Continuous writes exceeding the SLC cache size will drop to the write speed of the TLC flash.
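The SLC-cache behavior can be sketched with a toy model: writes proceed at the fast SLC speed until the cache fills, then fall to native TLC speed. The cache size and both speeds below are illustrative guesses, not Trion 100 specifications:

```python
# Toy model of an SLC-cached TLC drive. All figures are illustrative
# placeholders, not measured Trion 100 values.
SLC_CACHE_GB = 3      # assumed fast-cache capacity
SLC_MB_S     = 500    # assumed SLC-mode write speed
TLC_MB_S     = 150    # assumed native TLC write speed

def write_time_s(total_gb):
    """Seconds to absorb a sustained write of total_gb gigabytes,
    assuming the cache starts empty and has no time to flush."""
    fast = min(total_gb, SLC_CACHE_GB)
    slow = total_gb - fast
    return (fast * 1024) / SLC_MB_S + (slow * 1024) / TLC_MB_S

print(f"2 GB burst: {write_time_s(2):5.1f} s")   # fits in the cache
print(f"10 GB copy: {write_time_s(10):5.1f} s")  # spills over to TLC
```

A short burst fits entirely in the cache and sees only SLC speed, while a large sequential copy spends most of its time at the TLC rate, which is exactly the drop-off sustained-write tests expose.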