
Building Our Kick-Ass Plex Server With AMD Ryzen Threadripper 1950X

Author: Jim Tanous
Subject: General Tech

Storage, Performance, and Conclusion

Storage

Since we're not using an OS like FreeNAS, and since we don't want to use something like Windows Storage Spaces, we elected to use a hardware RAID card to build our primary storage array. We went with Allyn's favorite RAID company, Areca, and picked up a used 16-port ARC-1261 on eBay for only about $150. We upgraded its onboard RAM to the maximum of 2GB.


We got lucky with the timing of our build. Just as we were planning out and budgeting for storage, the Western Digital easystore line of external drives went on its first big sale. These 8TB drives, available exclusively from Best Buy, carry a retail price of about $300 each, but we picked up ours during that first sale for about $190 each, a huge discount (and they've since fallen to as low as $150 in subsequent sales -- don't pay retail price for these drives!).


The key is that these external hard drives contained a standard Western Digital Red drive inside (at least, they did at the time; more recent models now contain white label drives that share the Red's performance characteristics but may have some compatibility issues with certain devices due to the power pin layout). So we canvassed all of the Best Buy stores in our area, shucked 'em all, and ended up with a nice big stack of 8TB Reds.


To hold all of our drives and hardware, we chose the iStarUSA M-4160-ATX, a 4U rackmount chassis with room for 16 3.5-inch drives. One nice feature of this chassis is that its storage backplane uses SFF-8087 miniSAS connectors, the same type used by our Areca RAID card. Since each SFF-8087 cable carries four drive connections, only four data cables were needed to connect all 16 drives, helping keep our case neat and tidy.


With our drives installed, we used the Areca management interface to configure all of the drives into a single RAID 6 array with a usable capacity of 112TB. From there, we used Windows sharing and permissions settings to separate access to the Plex data and our PCPer files. And yes, we know, RAID is not backup. So we also have a Synology NAS onsite for local nightly backups (using the handy app Bvckup 2), as well as a Backblaze account for a second, cloud-based backup.
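
For the curious, the capacity math is straightforward; here's a quick sketch in Python, assuming all 16 bays are populated with the 8TB Reds and that RAID 6 reserves two drives' worth of space for parity:

    # RAID 6 usable capacity: total drives minus two drives' worth of parity.
    # Assumes all 16 bays are populated with the 8TB Reds described above.
    def raid6_usable_tb(drive_count: int, drive_size_tb: float) -> float:
        if drive_count < 4:
            raise ValueError("RAID 6 requires at least 4 drives")
        return (drive_count - 2) * drive_size_tb

    print(raid6_usable_tb(16, 8.0))  # 112.0 (TB)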

Performance

Our primary storage array isn't going to be the absolute best performer due to its slower WD Red drives and older RAID card, but it's still more than adequate for our needs. In terms of sequential transfers, we can achieve average speeds of about 640MB/s for reads and 720MB/s for writes, both locally on the server and for large sequential transfers over the 10GbE network.
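
As a rough sanity check against the network (a sketch using decimal units and ignoring protocol overhead), a 10GbE link tops out around 1,250MB/s, so the array, not the network, is the limiting factor for these transfers:

    # Rough check of array throughput against 10GbE line rate (decimal units,
    # protocol overhead ignored).
    LINE_RATE_MB_S = 10_000 / 8  # 10 Gb/s == 1250 MB/s

    for label, mb_s in (("sequential read", 640), ("sequential write", 720)):
        print(f"{label}: {mb_s} MB/s ({mb_s / LINE_RATE_MB_S:.0%} of 10GbE line rate)")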

Such speeds are overkill for Plex alone, but they make accessing our PCPer video and data files a much more pleasant experience when combined with the faster network.

As for our processor, we couldn't be happier with the Threadripper 1950X. When it comes to estimating streaming capability, Plex's general guidance is about 2,000 PassMark points per simultaneous 1080p stream. The actual requirements will of course vary based on the complexity of the specific media file and your transcoding settings, but the "2,000 per stream" rule is a good place to start.


We ran the PassMark Performance Test on our completed build and received a CPU score of 23,602 at stock frequencies. According to the PassMark database, the average score for the 1950X is 21,941, so we're sitting quite pretty.

Based on Plex's guidance, our score means we should be able to handle at least 11 simultaneous 1080p transcodes (23,602 / 2,000 ≈ 11.8), and in practice we'd rarely approach even that load, since at least some of our Plex clients will be direct streaming or direct playing media with little hit on the CPU. In short, when your Plex server is powered by a Threadripper 1950X, the limiting factor quickly becomes your Internet bandwidth, or even the speed of your storage array, rather than your CPU's transcoding horsepower.
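
The back-of-the-envelope math behind that estimate, treating Plex's figure strictly as a rule of thumb:

    # Plex's rough guidance: ~2,000 PassMark points per simultaneous 1080p
    # transcode. Real headroom varies with source complexity and settings.
    PASSMARK_PER_1080P_TRANSCODE = 2_000

    def estimated_transcodes(passmark_score: int) -> int:
        return passmark_score // PASSMARK_PER_1080P_TRANSCODE

    print(estimated_transcodes(23_602))  # 11 simultaneous 1080p transcodes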

Why Not a Used Xeon Server?

The recent trend among Plex enthusiasts is to pick up used Xeon-based servers in lieu of building a custom system from scratch. As companies large and small upgrade to newer hardware, it's not uncommon for them to recoup some of their costs by selling off the old gear. The servers on the market now are generally V1- and V2-era E5 Xeons, and many of these servers, including dual-processor models, can be had for well under the cost of a Threadripper 1950X alone.

So why not go this route? While the used server approach is a great option, it has a few drawbacks that we wanted to address. First, these Xeon processors are now several generations old and obviously don't offer the same level of performance as their modern counterparts. For example, here's a server listed for $700 that includes two first-generation 8-core Xeon E5-2660 processors. With a PassMark score of 11,107 each, their combined score still falls just short of our 1950X's, and that assumes perfect scaling between the two processors, which most workloads won't achieve.
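
To put rough numbers on that scaling caveat (the efficiency figures below are illustrative assumptions, not measurements):

    # Aggregate dual-Xeon score under different (assumed) scaling efficiencies,
    # compared against our measured 1950X result.
    XEON_E5_2660_SCORE = 11_107        # per CPU, from the PassMark database
    THREADRIPPER_1950X_SCORE = 23_602  # our measured stock result

    for efficiency in (1.00, 0.90, 0.80):
        aggregate = 2 * XEON_E5_2660_SCORE * efficiency
        print(f"{efficiency:.0%} scaling: {aggregate:,.0f} vs. {THREADRIPPER_1950X_SCORE:,}")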

Another issue is reliability. Enterprise-class components are certainly built for reliability, but the used processors and systems being sold have already given a "lifetime" of service. When buying these parts and systems, you likely won't know what types of workloads they were given, how adequately they were cooled or maintained, or any other issues that could affect their performance and longevity. While new components aren't immune to technical issues, we don't have to worry about any abusive past they may have experienced, and we have the protection of a manufacturer's warranty that will at least get us through the first few years.

Finally, there's the issue of noise. These used servers were tuned for cooling performance and intended for dedicated datacenters; in their former life, it didn't matter if they sounded like jet engines. The same probably isn't true of your home or small office Plex server. In our new custom-built server, the Noctua cooler runs at just over 20dB, and we opted for Noctua case fans for the drive array as well. It's certainly possible to modify a used server for quieter operation, but ours was designed for quiet operation from the start.

So, in summary, there's nothing at all wrong with pursuing the used server route for your own Plex server build, but we found a single, modern, faster processor to be better suited to our needs.

Quirks and Conclusion

Our new server has now been running for several months and has proven itself to be a fantastic upgrade, both in terms of productivity as well as entertainment. But it's still not perfect, and there are some changes and upgrades that we may consider in the future.

First, while Windows 10 Pro meets our needs, it's not the ideal operating system for this type of server. Those Windows Updates that we thought we could handle continue to be a pain, with the recent Fall Creators Update causing a significant amount of frustration when it unexpectedly broke compatibility with some of our apps and workflows. A better solution, and one that we just didn't have time to implement initially, would be to use a storage-focused OS like FreeNAS or a Linux server distribution, and then virtualize any other operating systems we may need. There are no immediate plans to take that step, but it's something we know we'll need to take care of sooner or later.

Our storage performance could also be better. Newer RAID cards, faster drives, solid state caching, and the latest PCIe 3.0 10GbE NICs could all help in this regard. For comparison, we're currently testing a QNAP NAS that, when populated with 7200rpm datacenter drives, can max out our 10GbE network with real world speeds of over 1GB/s.


In all other respects, however, our new Ryzen Threadripper-based server performs like a champ and has significantly improved things here at the office when compared to our old, slower server confined to a gigabit network. We've learned that even though AMD's Threadripper line is primarily targeted at high end workstations, it can make a heck of a server platform for small offices like ours.


January 24, 2018 | 02:23 PM - Posted by mouf

Obviously you don't want to divulge too much information about your internal workings, but based on the story it sounds like the server is used for file storage and Plex. I don't understand why FreeNAS or something similar wasn't used, based on the description given in the story.

But also, FreeNAS doesn't have great support for AMD Ryzen/Threadripper at this time, to my knowledge.

January 24, 2018 | 03:36 PM - Posted by Ken Addison

Honestly, I spent several weeks trying a lot of operating systems and solutions, including FreeNAS and some other Debian-based server environments (OpenMediaVault). The answer is that we live in a Windows environment here, and when all was said and done, Windows-to-Windows file sharing was the easiest and most compatible.

I'm sure we could have configured SMB on FreeNAS to work perfectly, but at some point we had to go ahead and implement the server instead of spending more time on it. 

I was very impressed with the time I spent with FreeNAS and OpenMediaVault (a shout-out to the latter; if you've never taken a look at it, it's really neat!). However, when the time came, Windows was the best solution for us.

January 25, 2018 | 12:31 AM - Posted by OnimeNoGarou (not verified)

UnRAID would have been perfect for you guys. You would have been able to use the built-in Docker support for Plex and the built-in virtualization system for running Windows. Using the features available on Intel and AMD systems, you could have assigned 4 (or more) cores just to Windows and the rest to everything else.

January 25, 2018 | 07:56 AM - Posted by quest4glory

No, they said they wanted native performance at certain times, meaning not only bare-metal performance but also access to all cores. It's in the article.

Also in the article is a comment that they don’t have time to deal with it now, but know they will need to eventually.

January 24, 2018 | 03:44 PM - Posted by Lexrst (not verified)

The Plex jail for FreeNAS is community maintained and runs far behind the Windows build in terms of features. I ran it for a while and finally abandoned it for the Windows platform. Sad, but if Plex is your killer app on your storage system, that's the way it goes.

January 26, 2018 | 09:13 AM - Posted by Bri (not verified)

You have to update Plex in the jail yourself. There's an old forum post out there, on either the Plex or FreeNAS forum, that will walk you through it. It's a pain, but it only takes a minute.

January 25, 2018 | 06:58 PM - Posted by ALQUIZA, Erwin Lazo (not verified)

Use raw FreeBSD instead, and build a full-blown FreeNAS or TrueNAS. I've built several boxes already over the past six months, and they're all performing at par.

January 24, 2018 | 02:58 PM - Posted by Paul A. Mitchell (not verified)

FYI:
http://www.icydock.com/goods.php?id=255
+
http://highpoint-tech.com/USA_new/series-ssd7120-overview.htm
+
https://www.newegg.com/Product/Product.aspx?Item=N82E16817801139
(Allyn has one of the latter)

January 24, 2018 | 03:30 PM - Posted by Raghunathan (not verified)

Nice to see ThreadRipper being able to perform well as a server platform.

January 24, 2018 | 03:59 PM - Posted by Randy (not verified)

Was there any thought of doing an Epyc 7401P based system? Or was it more that you likely had most of the parts for Threadripper available from previous reviews and such?

January 24, 2018 | 04:32 PM - Posted by BitCoinzWhateverCoinz (not verified)

Epyc is the better platform for workstation/server usage, with its 128 PCIe lanes and 8 memory channels per socket. So the dual-socket Epyc SP3 motherboards with 16 memory channels (8 per socket), fitted with two EPYC 7251 8-core CPUs at around $500 each, may be great if total memory bandwidth is needed for decoding workloads.

Threadripper motherboards are not certified/vetted for ECC memory like the Epyc boards are. And the more memory slots there are, the more population options the user has to employ lots of lower-cost, low-capacity DIMMs instead of fewer high-cost, high-capacity DIMMs. A dual-socket Epyc board with 16 memory channels is currently less costly than some of the few single-socket Epyc/SP3 options, at least if the user isn't looking for lots of PCIe x16 slots like the Gigabyte single-socket Epyc/SP3 board offers for around $610-$650.

The Epyc motherboards and CPU SKUs also get 3+ year warranties and other features not available on the consumer boards.

January 24, 2018 | 09:23 PM - Posted by quest4glory

This is getting old.

January 24, 2018 | 10:36 PM - Posted by RealProHardwareNotConsumerTAT (not verified)

It never gets old: consumer crap touted as workstation/server grade, and the gaming morons who eat that crap up. AMD's Epyc SKUs are a better deal, feature for feature, than any consumer gaming-moron tat! And this is not Intel's overpriced, Meltdown-affected offering; Epyc is plenty affordable relative to any of AMD's consumer-branded stuff.

No one gives a rat's A$$ about overclocking and gaming where workstations/servers are concerned. Epyc represents a better value than any consumer Threadripper CPUs/MBs, and the little gaming gits' concerns will matter less and less for both AMD and Nvidia as far as GPUs are concerned.

Take your Threadripper, with its "ECC" compatibility and limited motherboard features compared to Epyc/SP3, and get the fudge out of here. You can not fool anyone with that consumer "workstation" nonsense anymore.

Stupid gamers, how's that GPU availability going for your little gaming usage, now that GPUs have more uses than only gaming? Epyc kicks Threadripper's A$$ for workstation/server value any time and any place.

Vega is not here even now for gaming in any numbers, but Vega 10 is sure inside those Radeon Pro WX 9100s and Radeon Instinct MI25s, along with the real workstation Epyc/SP3 motherboard SKUs with real grown-up ECC certification/vetting, real warranties, and long-term firmware/driver support.

January 26, 2018 | 04:06 AM - Posted by Jim Tanous

Availability of components was a major factor in the decision making. We started this build months ago and barely had any choice in motherboard for Threadripper, let alone Epyc. For a true enterprise-class situation, Epyc would have been worth waiting for. In our case, while we have more going on than most home/SMB setups, we didn't necessarily need that level of performance or the enterprise/server feature set.

Although man, that 7401P offers a heck of a lot of performance for its price point, at least for heavily multithreaded workloads.

January 24, 2018 | 04:07 PM - Posted by qwer38456

Is there something inherently "wrong" with Windows Storage Spaces? I'm running it now on my HTPC with about 6TB of media. I don't have any feelings for or against it, but I would love to know if I should migrate off it in my next build.

January 24, 2018 | 04:20 PM - Posted by joelgarnick (not verified)

Technically, Win 10 Storage Spaces works well; I built a storage/Plex server on it a couple of years ago. What convinced me to move to something else was Microsoft's increasingly aggressive update strategy, which was frequently rebooting my server when I didn't want it to reboot.

January 24, 2018 | 04:09 PM - Posted by hoxlund

Gives me an idea of what to do with my 1950X Threadripper system when it becomes obsolete in 10 years :)

https://pcpartpicker.com/list/C6Tryf

January 24, 2018 | 09:12 PM - Posted by Mnemonicman

I had the same idea but for my 5960X system in probably much less time.

January 24, 2018 | 06:46 PM - Posted by SkOrPn

Isn't Plex working on a server-less system already, so that the need for dedicated Plex servers becomes redundant someday soon? I hope so.

January 26, 2018 | 04:17 AM - Posted by Jim Tanous

Well, the Plex Server application has long been available on some NAS devices, as well as the NVIDIA SHIELD. For most users, either of those options is likely just fine. The primary factor is the device's ability to transcode media when a Plex client isn't able to directly play it. In the case of the SHIELD and high-end NAS devices, they can handle one or two simultaneous transcoding sessions without issue.

On the lower-end NAS devices, or in the case of high resolution HEVC source files, you may not be able to transcode to Plex clients. In that case, you could "optimize" your media files, either manually with Handbrake, FFmpeg, etc. or by using Plex's built-in optimize feature. What this does is create a version of your file that can be directly played on your Plex clients (iPad, Fire TV, etc.), allowing your NAS device to just stream the original file without needing to transcode it.
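
For the manual route, such a pass might look something like this rough sketch (the x264/AAC settings are illustrative rather than Plex's exact optimize recipe, and the file names are placeholders):

    # Rough sketch of a manual "optimize" pass: re-encode an HEVC source into
    # an H.264/AAC MP4 that most Plex clients can direct play. Settings are
    # illustrative, not Plex's exact recipe; file names are placeholders.
    import subprocess

    def optimize_for_direct_play(src: str, dst: str) -> None:
        subprocess.run([
            "ffmpeg", "-i", src,
            "-c:v", "libx264", "-crf", "20", "-preset", "medium",
            "-c:a", "aac", "-b:a", "192k",
            "-movflags", "+faststart",  # front-load the index so playback starts quickly
            dst,
        ], check=True)

    optimize_for_direct_play("movie-hevc.mkv", "movie-optimized.mp4")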

The downsides of this approach are the extra storage space consumed by the new "optimized" file (unless you choose to delete the original after optimizing) and the time it takes to re-encode your media library into a direct-playable format, which could be days or weeks depending on the number of files and the speed of your computer or NAS processor.
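
To put a rough number on that re-encode time (every figure below is a made-up assumption for illustration):

    # Rough estimate of how long an "optimize" pass over a whole library takes
    # at a given encode speed, expressed as a multiple of realtime. All inputs
    # here are illustrative assumptions, not measurements.
    def optimize_hours(num_files: int, avg_runtime_min: float, encode_speed_x: float) -> float:
        return num_files * avg_runtime_min / 60 / encode_speed_x

    # e.g. 800 files averaging 50 minutes, encoding at 2x realtime:
    print(f"{optimize_hours(800, 50, 2.0):.0f} hours")  # ~333 hours, about two weeks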

So, in our case -- and likely for the foreseeable future -- there's definitely still a good use case for dedicated Plex servers. But for many users, the ability to use a single small NAS or media device as a Plex server is already available.

January 24, 2018 | 06:50 PM - Posted by Audio-Listener (not verified)

Why Windows for a server? I know it's harder to take other options, but Linux with ZFS (Ubuntu, for example) works great with Plex, and Samba is trivial to set up. You can even easily separate your workloads with KVM. I work in enterprise, and we avoid Windows like the plague: too unreliable, hard to maintain, and insecure.
I just feel that, given the technical depth your team is renowned for, you absolutely could have gotten better results by avoiding Windows.

January 25, 2018 | 04:51 AM - Posted by dragosmp (not verified)

Windows is good, just not the client version. Get a Server version and you can easily tell it when to install updates, and it won't reboot unless you explicitly allow it. It does cost a bit more than a client license, but this is something that is supposed to be reliable and last a long time. You can always reuse it on the "next" server thanks to MS's very long support lifecycles.

January 24, 2018 | 09:23 PM - Posted by quest4glory

The headline on the front page reads "32 cores"...

January 26, 2018 | 03:54 AM - Posted by Jim Tanous

Thanks for catching that -- fixed!

January 25, 2018 | 05:29 AM - Posted by Justin Stephenson (not verified)

A nice system, and very effective no matter what others say about consumer-grade stuff in an enterprise work environment.

My home NAS is not powerful enough to run as a Plex server (it is old); an upgrade would be good, but I think your system might be overkill for me!

Now I will go back to the workarounds for W10 breaking networking with the NAS (sorry, it is an upgraded security feature!) -- the NAS does not turn up under Network in File Explorer. Not a big issue, but still a pain.

January 25, 2018 | 10:14 AM - Posted by None Given at this time (not verified)

One suggestion: You're not mounting the hardware in the rack correctly. Rack mount cases are designed to line up with the "U" markers on the rack. At no point should a case be mounted where it is straddling part of the next section.

In your picture above, you have a 4U case that is literally taking 5U of space because it wasn't properly mounted. Also, the mount points are off because the mounting holes are not uniform on the rack. This is why you can't use 4 screws to attach this.

To remedy this, unmount the case, line up the clips on the rack for the screws to match where they are on the ears for the case (or line up the rails...tough to tell from the picture), and have the case go from the bottom of 20 to the top of 23 OR the bottom of 21 to the top of 24.

Nice article. Poor finish!

January 26, 2018 | 03:58 AM - Posted by Jim Tanous

Indeed. In fact, it's not really mounted at all in the article's picture, just sitting on some makeshift rails while we move stuff around. As Ryan mentioned while discussing this article during the most recent podcast, our "server room" is kind of a mess right now, so everything is just thrown together for the time being. I'll be sure to reference your comment when we go to permanently mount the server and our other equipment. Thanks!

January 25, 2018 | 11:39 AM - Posted by DiaperDanDoodied

Have a look at sichbopvr. I'm slowly migrating away from Kodi PVR and the Media Center 10 hack (the forced 1709 update broke DVR in MCE).

Does Plex allow you to view your media remotely, or are you using a VPN?

January 25, 2018 | 12:08 PM - Posted by WindowsMindOfItsOwnDeletingYourContent (not verified)

I would not use Windows 10 for any media server, as Windows 10 may just decide to remove some of your saved content if it decides that content violates some DRM (digital rights management). So some Linux/open source solution without all that spying baked in is probably going to be the best option.

Wait until 2020 to see how Windows 10 will truly be, with even more forcing and even less control over your hardware!

January 26, 2018 | 04:24 AM - Posted by Jim Tanous

Yes, one of the key features of Plex is that it allows remote access for both the primary account and any "friend" accounts with whom your media is shared. I believe that currently the only limitation is Live TV. Specifically, if you have Live TV set up and enabled, the primary account can view that stream remotely, but not any shared accounts. However, shared users have full access to live TV programs that have been recorded via Plex DVR.

January 25, 2018 | 03:03 PM - Posted by C.W. Olson (not verified)

How did you guys fix the forced Win 10 updates and reboots? Sorry if this has been asked already (if so, just say so and I will look through all the comments to find it); I did scroll down some but did not see it yet! Since this was built, have you done any hardware updates and run into any problems?

January 26, 2018 | 04:31 AM - Posted by Jim Tanous

How did you guys fix the forced Win 10 updates and reboots?

We haven't. There are various unofficial methods to prevent automatic updates in Windows 10 (and some official methods to at least temporarily delay the updates), but we haven't yet implemented those. Part of the reason is we know we need to move to another platform eventually. The other reason is that our workloads don't require 100% uptime, so we've grown accustomed to performing regular updates/maintenance/reboots during downtimes to minimize the chance of things going down during a critical moment.

Since this was built have you done any hardware updates and run into any problems?

We haven't made any hardware changes to the server since it was finalized, although we've been playing around with other equipment in an attempt to optimize our 10Gb network. Those changes haven't affected the server. However, as mentioned in the article, the Windows 10 Fall Creators Update caused a number of software issues that took some time to address, including issues with VMware Workstation, our backup software, and some network sharing permissions (the weirdest of which involved the FCU changing our network from Private to Public, which then locked all of our shared apps and services behind the Windows firewall).

January 25, 2018 | 04:03 PM - Posted by Vincent Repole (not verified)

Why not just disable Windows updates? I haven't updated since sometime last year.

January 25, 2018 | 07:22 PM - Posted by Jailer (not verified)

Let me know how that 16-drive array works out when the first drive fails and you have to rebuild with only two drives of redundancy.

January 26, 2018 | 04:34 AM - Posted by Jim Tanous

Fair. But that's why we have multiple backups of this data.

January 25, 2018 | 07:31 PM - Posted by Jailer (not verified)

Also, was ESXi not an option?

January 26, 2018 | 04:35 AM - Posted by Jim Tanous

Honestly, ESXi is where I think we'll ultimately end up.

January 28, 2018 | 07:33 AM - Posted by Gustation (not verified)

How many transcoded streams are you actually seeing versus the educated guess? Check out Byte My Bits on YouTube for some good Plex info.

January 31, 2018 | 01:47 AM - Posted by JayK (not verified)

Maybe you can reach out to Gigabyte and ask them why there's still no sign of any BIOS update with AGESA 1.0.0.4 or newer, even after months. Just silence...

February 12, 2018 | 07:20 PM - Posted by omegatotal2 (not verified)

I am running a Dell R610 (software testing) and a Dell R620 (VMware ESX 6 Standard), and both can be nearly silent even in a 1U form factor with dual PSUs and between 12 and 20 cores plus HT.

Both use around 100-110 watts at idle from the wall (measuring both PSUs) when configured with mid-range 95-watt TDP CPUs.

The PSUs are cheap on the open market, can be found new, are easy to find, and generally don't fail, as they are usually Delta-built units.

~~~~~~~~~~~~~~~~
Best idea for the speed you want with some more versatility:

Set up your home-brew storage server as an iSCSI target on whatever OS you want, with a 4- or 8-core CPU and lots of fast RAM set up as cache.

Use the SSDs for an additional cache tier, and boot from USB drives.

Use 10+GbE or FC NICs in the NAS box (FC for lower latency; its iSCSI offloading support is also better) and the same NIC in the R610/R620, run VMware ESX on the R610/R620, and host whatever you want. Run smaller SSDs locally for the high-IOPS VMs, set up an iSCSI datastore for the lower-IOPS VMs, and boot from the internal SD card or a USB thumb drive (get the SD cards from Dell for best reliability).

Pass the FC cards and/or 10GbE NICs through to the guest VM to lower latency.

Or get similarly configured 1U HP servers.

Then you can use direct-attach cables in the FC NICs if they are SFP or SFP+, so you don't need fiber and SFP modules, unless you also want to get an FC switch and run multiple servers connecting to the same NAS/storage array.

Not that much more costly but potentially way more versatile.
