OK, now that our Synology DS1812+ and DS2413+ are old, we got a new Synology RS2423+, but the DS2413+ is going to go end of life soon. So it's time to think about how that replacement will go. Make no mistake: just as we used the 2008 Drobo and the 2010 Drobo Pro until the hardware failed (both have now), I think we can get another five years out of these hardware-wise, but they will fail. Already the DS1812+ gets no more software updates, and Bay 5 is very, very cranky about accepting a drive (plus all but one of the drive levers has broken).
So what to do? Well, being all Synology is great, but I might as well take this hobby and explore other systems. The two I've been thinking about are QNAP and FreeNAS (now TrueNAS). Both have ZFS-based offerings, and ZFS is something I got running on Ubuntu back in 2014; I still regret not salvaging that server. It was a great Xeon system: a DIY 4U Supermicro X10 NAS in a Norco enclosure that worked fine with SAS drives. It ended up in the scrap heap, but it was a nice machine.
So what are some choices…
iXsystems TrueNAS Mini R
iXsystems is the main developer of TrueNAS (formerly FreeNAS), and they have a $1,850 (Amazon or direct) 12-bay TrueNAS Mini R appliance based on the older Atom C3000 series. This is an older chipset, but it is very well tuned for the job and it just works out of the box. Note that there are now two versions of TrueNAS: Core (the original BSD-based version) and the newer Scale (which is Debian-based), with all the containerization and the ability to run applications. The Mini R uses a Supermicro A2SDi motherboard and ships with 32GB of RAM. The main thing to note is that this design is five years old and the newer Supermicro A3SPI-4C with the Atom C5315 Parker Ridge is already out, so as usual you are getting a completely built solution; see the DIY section if you want to get onto the bleeding edge. However, reviews show that the C3000 is very good and runs just fine, so this is the simplest and cheapest option. You can get a 32GB memory upgrade for $175, also from Amazon, later if you don't yet know whether you will need the full 64GB of RAM.
For this file server, the appeal is that it just works, and it supports ZFS and in particular RAID-Z2, a flexible dual-parity scheme like RAID 6 (and like Synology Hybrid RAID). ZFS is an enterprise-grade file system, and the main thing is that it likes lots of RAM and benefits from putting two of its data structures, the ZIL (the intent log) and the L2ARC (the read cache), on SSDs.
Storage Review looked at the unit, and it's actually pretty impressive how little is in it: most of the box is just air, with a tiny mini-ITX board at the back carrying the Intel Atom C3758, a 14nm Denverton chip with eight cores and eight threads running at 2.2GHz. The minimum RAM required is 8GB plus 1GB per drive, so 32GB is probably a practical minimum here. The TrueNAS Scale applications are the bigger deal, letting you run Docker containers like Home Assistant very easily. Performance was interesting with a combination of eight hard drives and four SSDs: it could handle a sustained 972/738MBps on large sequential transfers, so plenty fast. Overall, nice performance.
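To make that RAM rule concrete, here is the arithmetic as a trivial sketch (the 8GB-plus-1GB-per-drive rule is the one quoted above; rounding up to the next common ECC memory configuration is my own assumption):

```python
# Rule-of-thumb RAM sizing for a ZFS NAS: 8GB base + 1GB per drive,
# rounded up to the next common ECC memory configuration.

def zfs_ram_gb(drives: int, base_gb: int = 8, per_drive_gb: int = 1) -> int:
    """Return a practical RAM size in GB for a ZFS box with `drives` disks."""
    needed = base_gb + per_drive_gb * drives
    for config_gb in (16, 32, 64, 128, 256):  # common DIMM configurations
        if config_gb >= needed:
            return config_gb
    return needed

for bays in (8, 12, 24):
    print(f"{bays:2d} bays -> {zfs_ram_gb(bays)}GB RAM")
# 8 bays -> 16GB, 12 bays -> 32GB, 24 bays -> 32GB
```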
QNAP TS-1273AU-RP-8G + 8GB RAM
QNAP also has two versions of their system, QTS and QuTS hero: the older one is ext4-based, and the newer ZFS-based one is called hero. I really would rather get hero since it is ZFS, but the hardware that runs it is quite a bit more expensive.
The lines are confusing, but they have a low-end ARM line that hasn't been updated in four years or so, a mid-range Atom-based line, and then a high-end Xeon family that sets you back $3-7K.
There are not many reviews of the ARM line, but there is a good example that runs only QTS for $1.2K. By the way, the decoder ring for QNAP names is: the first digits are how many bays (12 in the case of the TS-1273), RP means redundant power supply, and the final number is how many GB of DRAM; the shorthand for this line is the TS-x73 line (there is a small decoder sketch a couple of paragraphs down). There is also the newer TS-832PXU-4G, which uses an ARM chip from Annapurna and is 8-bay with dual 10GbE for just $940 at Amazon, or its cousin the TS-1232PXU-RP-4G for $1.6K at Amazon.
The entry-level x86 system that can run QuTS hero uses the AMD Ryzen Embedded V1500B SoC: the 12-bay TS-1273AU-RP-8G, which is $2.2K on Amazon with 2×2.5GbE. This is about the same price as the Synology RS2423+ at $2.4K or so, particularly since you probably need 16GB if you are going to run QuTS hero, which means an additional $180 for QNAP's 8GB DDR4-2400 RAM if you buy it from them (part RAM-8GDR4A0-UD-2400).
There is also the TS-873AeU, which has 8 bays and is very short, and the TS-1673AU-RP which, as the name decodes, has 16 bays and redundant power for $3K on Amazon. Note that I'm only talking about rack-mounted systems here. These are a match for the Synology RS2423+ (the decoder ring here: RS means rack-mounted RackStation, 24 means the maximum number of drives with an expansion chassis so the base unit is 12, 23 means launched in 2023, and the plus sign is the product line).
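Since I keep having to decode these names, here's a toy decoder that just encodes the rules above. It's purely illustrative and only handles the patterns discussed in this post; I'm also assuming the "h" prefix marks the QuTS hero models, going by the model names later in this post.

```python
import re

def decode_qnap(model: str) -> str:
    """Decode TS-[h]<bays><family><letters>[-RP][-<N>G] per the rules above."""
    m = re.match(r"TS-(h)?(\d{3,4})([A-Za-z]*)(-RP)?(?:-(\d+)G)?$", model)
    if not m:
        return "unrecognized"
    hero, digits, _letters, rp, ram = m.groups()
    parts = [f"{int(digits[:-2])} bays", f"x{digits[-2:]} family"]
    if hero:
        parts.append("QuTS hero class")  # assumption: 'h' marks hero models
    if rp:
        parts.append("redundant power")
    if ram:
        parts.append(f"{ram}GB RAM")
    return ", ".join(parts)

def decode_synology(model: str) -> str:
    """Decode <DS|RS><max drives with expansion><launch year>[+]."""
    m = re.match(r"(DS|RS)(\d{2})(\d{2})(\+)?$", model)
    if not m:
        return "unrecognized"
    family, max_drives, year, plus = m.groups()
    parts = ["rackmount" if family == "RS" else "desktop",
             f"up to {max_drives} drives with expansion",
             f"launched 20{year}"]
    if plus:
        parts.append("plus line")
    return ", ".join(parts)

for name in ("TS-1273AU-RP-8G", "TS-832PXU-4G", "TS-h1277XU-RP",
             "RS2423+", "DS1812+"):
    fn = decode_qnap if name.startswith("TS") else decode_synology
    print(f"{name}: {fn(name)}")
```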
You can also go higher end with a Ryzen 7 3700X (8 cores at 3.6GHz) in the TS-h1277XU-RP, which has dual 10GbE and 12 bays. This is a brand-new model, so worth thinking about, but it is expensive, starting at $4K at Amazon.
At the very top end, you can get to a higher-end 6-core AMD Ryzen 5 7000-series with the TS-h1277AXU-RP-16G, or even to Intel Xeon and 18 bays with the TS-h1887XU-RP, but now you are at $3K at Newegg (it's $4K for the 16GB model at Amazon), which for 12 HDD and 6 SSD bays is actually not a bad deal. And seriously, for a home server this is way overkill, but kind of cool to own.
Supermicro-built systems
I had thought Supermicro only did boards, but it turns out they now make complete NAS systems, processors and all, which is nice, and you can just get them from Newegg.
A review of a 10-year-old build
It turns out I actually blogged the configuration I built in 2014, which was a pretty decent build that featured SAS drives, since back then they were 12Gbps and had higher reliability:
- Norco RPC-4220. This was a 4U server box with five rows of four drives each; each row has a single SFF-8087 connector, and you use a breakout adapter to get four SATA cables out of it, giving you 20 hot-swappable drives. Today a similar 12-bay hot-swappable case is the Rosewill RSV-L4112U for $360. And there is a 2U version that looks a lot like what the pros are using, the Athena Power RM-2U2123HE12, for $367 with a 12Gbps mini-SAS SFF-8643 backplane, so you can put SAS drives in there if you want. The Supermicro CSE-836BA-R920B, a 3U with 16 hot-swappable bays and a 920W redundant power supply, is $1.2K alone. Supermicro also has a huge line of complete servers like the SSG-620P-ACRH12H.
- Supermicro X10SRH. These have onboard SAS controllers with mini-SAS SFF-8643 connectors, so I just needed a male SFF-8643 to male SFF-8087 cable to use them. This was an ATX board with 8× SAS3 and 10 SATA connectors. What a beast; that kind of onboard I/O is the nice thing about Supermicro.
- Xeon E5-2600 v3. I don't remember the exact model, but it was an LGA2011-3 system and could support up to 256GB of registered ECC RDIMMs. Wow, not a bad build.
DIY with TrueNAS Scale. Yikes!
The final option is to repeat the journey with the Supermicro and build a NAS myself. That is sort of appealing from a hobby point of view, and it really wasn't that hard back then. The main bleeding edge here is going flash-based, so everything is an NVMe SSD. I can't figure out why I need something that fast, but it's fun to think about.
The bleeding edge includes things like the ASRock ALTRA-NAS, which uses a 96-core Ampere Altra ARM chip aimed at server virtualization (not sure why I need that, but why not).
Or jump into the deep river that is the TrueNAS forums and all the problems of finding the right processor. The problem, in a nutshell, is that Intel is moving up-market and doesn't want low-end servers or desktops to cannibalize its high end, so it withholds both ECC memory support and PCIe lanes for I/O from its low-end parts.
So most people on the forums are buying old data center hardware off eBay, which I think is a little scary if you don't know what you are doing. Most of the real hackers are buying cheap hardware that is one, two, or even three generations old to save money. The current latest generation is the Supermicro X13SCL-F, which supports the latest LGA1700 processors such as the Intel Core i5-14500 and runs DDR5-4800 ECC memory; on the AMD side, there is the H13SAE-MF.
So most folks are not using the latest technology. DDR5, for example, is expensive, particularly in the ECC variety, and you are not playing games with this stuff (which is why Atoms and Ryzen V1500s show up in the commercial units).
So, for example, the first thing is to decide how far back you want to go. Most of the time the forums recommend Supermicro server boards, which makes sense to me. They also recommend more cores, which is why a Xeon Gold is preferable to a Core i3 if you have a choice and are running virtual machines and containers. I got confused by Intel's extensive and overlapping product line, but here is what I've learned: there is a split between desktop and server processors, and the main difference is support for ECC RAM, which you want on every machine but most particularly on servers. It's pretty ironic and sad that the state-of-the-art machine we built 10 years ago would still be a nice build today. I still remember seeing that server in the scrap heap and thinking I should just take it home. I should have. Same memory of throwing away my Dad's original Minolta SRT-101 camera. I shouldn't have done that either (but I did buy two of the same model on eBay, so I guess I made up for it).
From my time building that old machine, it's ironic that the X10 board isn't that out of date, and on the forums, recommending an X11 isn't super crazy:
- LGA1200. C256 chipset for the Xeon E-2300.
- LGA1151. Skylake/Kaby Lake Xeon E3 v5/v6 on a Supermicro X11SSM, or a Core i3-6100/7100.
- LGA1151-2. Coffee Lake Xeon E-2100/2200 or Core i3-8100/9100 on a Supermicro X11SC.
So there are many things to learn in terms of actually picking one. The main thing is that you can get away with a low-end consumer processor like the Core i3, but giving up ECC and that reliability is a little painful. Linus Torvalds has said that dropping ECC from consumer machines is one of the big causes of Linux crashes, and with server data, I don't think you want that. So what's a good build?
Well, ironically, reading the trade press is super confusing, as it covers just one or two generations at a time and the names are dizzying: the fabrication processes, the code names, the core counts, and the crazy Intel naming. So here's a quick Xeon decoder ring, followed by some pricing studies on Newegg. This assumes, by the way, that you don't get a used server off eBay; I'm not sure how you can even be sure of what you are getting, but there are some real bargains to be found.
On the other hand, if the Norco box is any indication, a file server you buy now is going to last well into 2035! So first, a look at processors, where Wikipedia appears to be the only sane source of historical information on 1- and 2-socket server systems:
- 22nm. Ivy Bridge (Xeon E3/E5-1xxx/E5-2xxx v2). September 2013. LGA1155 socket for the E3 (the E5 v2 used LGA2011). This was the first time I started getting involved and where I decided to go for the next chip, which was state of the art for a long time. In that day, the E3 was for entry-level workstations and servers, the E5 was more powerful, and the first digit meant single or dual processor, I think.
- 22nm. Haswell-EP (E3/E5-16xx/E5-26xx v3). September 2014. LGA2011-3 socket. Note that the names at the time used the same nomenclature, but v2 was Ivy Bridge and v3 was Haswell. The second digit of a 16xx/26xx part indicates a server, whereas 12xx means a workstation chip.
- 14nm. Broadwell (E3/E5-1xxx/E5-2xxx v4). June 2015. The first 14nm chips.
- 14nm. Skylake-S/H (E3-1xxx v5). October 2015. LGA1151.
- 14nm. Kaby Lake-S/H (E3-12xx v6). March 2017. LGA1151.
- 14nm. Skylake-SP (Bronze/Silver/Gold/Platinum). June 2017. This is where things get confusing: the old E5-1xxx vs. E5-2xxx naming is replaced by metal grades, and SP means Scalable Processor, i.e. the multiprocessor line. The first digit is the grade: Bronze 31xx and Silver 41xx top out at two sockets, Gold 51xx/61xx at four, and Platinum 81xx at eight. The second digit is the generation, and a U suffix (as in the later Gold 62xxU) means uniprocessor.
- 14nm. Cascade Lake-SP (Bronze 32xx/Silver 42xx/Gold 52xx-62xx, plus R and U variants). April 2019. A long wait here, nearly two years, and I still don't get the names. As a low-cost example, a Xeon Bronze 3204 is a 1.9GHz LGA3647 85W part for $250 at Newegg.
- 14nm. Rocket Lake. March 2021. This takes the same architecture as Ice Lake and backports it to the lower-cost 14nm process, on LGA1200. It's the cost-reduced version, sold as the Xeon E-23xx family, and it needs the C256 chipset. This is a good choice for a lower-cost NAS (as we will see). For example, the E-2324G is a 3.1GHz part costing $270 at Newegg, and the E-2334 is a 3.4GHz 65W part costing $300 at Newegg.
- 10nm. Ice Lake-SP/W. April 2021. These use the Sunny Cove microarchitecture, the same cores as the 10th Generation client chips (but of course these are called Xeon, so call me confused again). Xeon Gold 63xx/53xx and Xeon Silver 43xx are the grades.
- 10nm. Ice Lake-D. February 2022
- Intel 7 (10nm-class). Sapphire Rapids-SP/WS/HBM. January 2023.
- Intel 7 (10nm-class). Emerald Rapids-SP. December 2023.
Confused yet?
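As a cheat sheet, here's the post-2017 part of that decoder ring as a toy script. It's unofficial and deliberately incomplete (Intel's suffix letters vary by generation), so treat it as a sketch of the pattern rather than a reference:

```python
import re

GRADES = {"3": "Bronze", "4": "Silver", "5": "Gold", "6": "Gold",
          "8": "Platinum"}
GENERATIONS = {"1": "Skylake-SP (2017)", "2": "Cascade Lake-SP (2019)",
               "3": "Ice Lake-SP (2021)", "4": "Sapphire Rapids-SP (2023)"}

def decode_scalable_xeon(model: str) -> str:
    """Decode post-2017 Xeon Scalable numbers: <grade><gen><sku>[suffixes]."""
    m = re.match(r"(\d)(\d)(\d{2})([A-Z]*)$", model)
    if not m:
        return "unrecognized"
    grade, gen, sku, suffix = m.groups()
    parts = [GRADES.get(grade, "?"), GENERATIONS.get(gen, "?"), f"SKU {sku}"]
    if "U" in suffix:
        parts.append("uniprocessor")
    return ", ".join(parts)

for name in ("3204", "4314", "6212U", "8380"):
    print(f"Xeon {name}: {decode_scalable_xeon(name)}")
# Xeon 3204: Bronze, Cascade Lake-SP (2019), SKU 04
# Xeon 6212U: Gold, Cascade Lake-SP (2019), SKU 12, uniprocessor
```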
DIY #1: Rocket Lake 24-bay NAS
Net, net: for a system with a new chip, Rocket Lake looks like the sweet spot right now, and you can build one for about $2K with twice the bays of the solutions above.
- Intel Xeon E-2334 Rocket Lake, 3.4GHz, LGA1200, 65W. $300. One caveat: in this family it is the G-suffix parts like the E-2324G that include the integrated GPU (the older Xeons didn't have any graphics at all), but the Supermicro -F board below has basic onboard video from its BMC, so you don't need a graphics card either way.
- Supermicro MBD-X12STL-F-O. $306. This is a C252 motherboard that supports up to 128GB of unbuffered ECC DDR4-3200. It has dual M.2 slots right on the board, which is nice for a boot drive and a cache drive. It has dual 1GbE Ethernet, so you might want a 10GbE card, and it supports six SATA III 6Gbps ports, so you will need a SAS adapter for all 24 bays.
- Norco RPC-4224, a 4U rackmount with 24 hot-swappable SATA/SAS bays. This is the brother of the case I bought 10 years ago! Yes, it costs $640, but it is a monster that fits a full complement of drives with six SFF-8087 mini-SAS connectors, so you will need a SAS controller to deal with all those drives. It also fits an EEB (12×13″) board and comfortably fits a mATX (9.6×9.6″) board, and it has all the case fans you will need.
The complexities of the right drive controller
The basic issue here is that LSI, which makes most of these, has a huge product line split between hardware RAID controllers and simple JBOD controllers without the RAID hardware. The latter are called Host Bus Adapters (HBAs), and they give software like ZFS the most control, so for ZFS you want an HBA rather than a RAID card.
If you are not using ZFS, then you probably want a hardware RAID controller like the one below, and you need two since there are 24 drives, and you have to decide how you want to configure them:
LSI MegaRAID SAS LS100208 RAID Controller (if not using ZFS). $98. It is a good match, with 4×SFF-8087 mini-SAS internal connectors for 16 drives. This is a hardware RAID system that supports RAID 10, 50, and 60; since there are so many drives, you may want to stripe the RAID arrays, or bypass all the hardware and use ZFS's fault-tolerance system instead. You can get two to support 32 drives, or pair one with the LSI MegaRAID 9260-8i, which supports eight drives for $56; I'd probably recommend just getting a matched pair of the bigger cards, but you can save $40 this way. But note again that if you are using ZFS, you just want a plain controller, as all that hardware RAID is wasted: ZFS wants ECC, lots of RAM (and optionally flash) for its caches, and a Host Bus Adapter so it sees all the drives.
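Either way you have to decide how to carve up 24 drives, so here is the back-of-the-envelope capacity math for a few layouts. This is my own arithmetic using the usual usable-equals-n-minus-parity approximation; real ZFS overhead (metadata, padding, slop space) eats a bit more, and the 16TB drive size is just an assumption:

```python
# Back-of-the-envelope usable capacity for 24 drives, comparing hardware
# RAID 6/60 with ZFS RAID-Z2 vdev layouts. Approximation: usable space =
# (n - parity) per group; real ZFS overhead eats a bit more.

DRIVE_TB = 16        # assumed drive size
TOTAL_DRIVES = 24

def usable_tb(group_size: int, parity: int = 2) -> int:
    groups = TOTAL_DRIVES // group_size
    return groups * (group_size - parity) * DRIVE_TB

layouts = {
    "one RAID 6 of 24 drives": usable_tb(24),   # slow, scary rebuilds
    "RAID 60, 2 x 12": usable_tb(12),
    "ZFS, 2 x 12-wide RAID-Z2": usable_tb(12),  # same parity cost as RAID 60
    "ZFS, 3 x 8-wide RAID-Z2": usable_tb(8),    # more vdevs = more IOPS
    "ZFS, 4 x 6-wide RAID-Z2": usable_tb(6),
}
for name, tb in layouts.items():
    print(f"{name:26s} {tb:4d} TB usable")
```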
But since this is about using ZFS, let's look at the HBAs on Newegg, where there are several generations available, from oldest and cheapest to the latest. Serve the Home also has an HBA guide, which unfortunately talks about chipsets and not boards:
- LSI 9200, the older low-end line. The LSI 9201-16i for $200 is a 4×4-drive card, an older design that uses 4× SFF-8087 SAS connectors (so you don't need any adapters for the Norco rack) on PCIe 2.0 x8.
- LSI 9300. Two versions match: the LSI 9300-8i, $79 for a host bus adapter running on PCIe 3.0, and the LSI 9300-16i at $99 for the 16-port version. These look like the lower-performance, older versions, which is probably just fine for this case. At that kind of pricing you might as well get a pair of LSI 9300-16i's for $200.
- LSI 9305-16i for $220, or the LSI 9305-24i, which is a rocking $300 but supports 24 drives via 6× SFF-8643 connectors and runs on PCIe 3.0 x8 on the processor side, so it is a nice single-card solution, but pretty expensive; in fact, a pair of 9305-16i's is nearly the same price if you have room for them. There is also the LSI 9306-16i, PCIe 3.0 with 4× SFF-8643 internal connectors for $222, supporting up to 16 drives. And yes, I don't really understand why there is a 9305-16i and a 9306-16i at basically the same price.
The current shipping Host Bus Adapters are really for high performance and are explained on the Broadcom site:
- LSI HBA 9500-16i. This is a newer card using the SAS3816 8-lane I/O controller that runs on PCIe 4.0, so it's not a great match if you end up in an older PCIe 3.0 slot on the Supermicro board. This card isn't on the Newegg site, but the LSI 9500-16e is, for $1,250. You really need this if you are building fast SSD arrays, I think.
- LSI eHBA 9600-24i. It is $400 on Newegg, so really not a bad buy. This is the newest line, using the new 4000-series I/O controller, so I assume these are the latest family; it supports SAS-4 data rates of 22.5Gbps in addition to SAS-3 at 12Gbps and SAS-2/SATA at 6Gbps. I mainly need to check that it has ZFS support. For the motherboard's PCIe 4.0 slot this is a nice match if we ever put SSDs into the 24-bay system. It does need an adapter from its newer SAS connectors to the older SFF-8087 connectors on the backplane, but it is all in a single card.
I'm not really clear about the performance differences between the LSI 9300 and 9305, but basically the 9300 was old in 2016 and the 9305 was new in 2016. The main difference is that the 9305's controller does up to 24 lanes on a single chip, while the 9300 is built from 8-lane chips; the bigger 9300s get there by adding another chip behind a PCIe switch, so they are more power-hungry. This also explains the pricing: a multi-chip card is really two chips tied together with a switch, while the 9305-24i is a more convenient single-chip solution. Both families use the same drivers and work with FreeNAS/TrueNAS.
From Serve the Home, there is an ordering of LSI SAS HBA chips to look at (note that even though we are connecting SATA drives, the SAS controllers will work fine). The basic point is that using the newest silicon is always a risk, and staying with Broadcom (which bought LSI) is the safe choice, as the drivers are the most mature. For large arrays where you need at least 16 HBA ports (i.e. drives), you should stick with the SAS 3200 and 3000 families. I'm assuming the LSI 9306 is an even newer family:
- SAS 3200, like the SAS3224 or SAS3216. These newer chips are found in the LSI SAS 9305-16i and 9305-24i (which foots with the notes above).
- SAS 3008/3016. These are the most cost-effective and the sweet spot; found in the LSI SAS 9300-8i.
- SAS 2308. Found in the LSI SAS 9207-8i.
- SAS 2008. These are old and reliable but don't handle SSDs that well. Not a big issue for us, but good to know for future-proofing. Found in the LSI 9211-8i.
The net is that for future-proofing, the LSI 9600-24i looks pretty good if you will ever need to saturate that many drives, but there's no word on its compatibility with TrueNAS, so I'm going to check the hardware recommendation guide, although that is dated 2021.
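To sanity-check whether the HBA's PCIe link is ever the bottleneck, here is the rough bandwidth math. The per-drive throughput figures are my own ballpark assumptions (~250MB/s sequential for a 3.5″ HDD, ~550MB/s for a SATA SSD):

```python
# Will the HBA's PCIe link saturate before the drives do?
PCIE_GBS_PER_LANE = {3: 0.985, 4: 1.969}  # usable GB/s per lane, roughly

def link_gbs(gen: int, lanes: int) -> float:
    return PCIE_GBS_PER_LANE[gen] * lanes

def array_gbs(drives: int, mbs_per_drive: float) -> float:
    return drives * mbs_per_drive / 1000

print(f"24 HDDs: {array_gbs(24, 250):4.1f} GB/s vs PCIe 3.0 x8 at {link_gbs(3, 8):.1f} GB/s")
print(f"24 SSDs: {array_gbs(24, 550):4.1f} GB/s vs PCIe 4.0 x8 at {link_gbs(4, 8):.1f} GB/s")
# 6.0 vs 7.9 GB/s: a PCIe 3.0 x8 card like the 9305-24i is fine for HDDs.
# 13.2 vs 15.8 GB/s: the PCIe 4.0 9600-24i only starts to matter with SSDs.
```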
Back to the rest of the components
- Kingston NV2 500GB NVMe. This is a PCIe 4.0 drive that supports up to 3.5GBps and is just $88, so a good boot drive, and a good candidate for the second M.2 slot as a cache drive.
- Kingston Premier series ECC RAM: the 16GB KSM32ED8/16HD, or the 32GB Kingston KTL-TS432E/32G at $80 each. It is amazing how much RAM prices have dropped; at roughly $45 per 16GB you might as well get 64GB, or splurge on the 32GB units to go to 128GB. ZFS likes to stretch out.
- Thermaltake Toughpower PF3 1200W, PS-TPD-1200FNFAPU-L, for $220. I tend to overpay for power supplies because they can be a lot of trouble, but this one is not going to go wrong. You can probably get away with much less, though, since the processor is only 65W and each drive draws about 30W only at spin-up. Figure the typical system needs 720W for the drives plus probably 200W for the rest, and you should pad this by 30% as the power supply ages, so a 1200W supply is probably the minimum you want (see the power budget sketch after this list). A Gold-rated supply is probably good enough, so you could reduce the cost here if you want to; this one is Platinum-rated, so very efficient.
- Chelsio T402-CS 10GbE NIC. These are better than Intel, which often has driver issues with FreeBSD. And it's just $45.
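Here is that power budget as a quick calculation; the 30W spin-up per drive, 200W system figure, and 30% aging margin are the assumptions from the list above:

```python
# PSU sizing: the worst case is all drives spinning up at once.
DRIVES = 24
SPINUP_W_PER_DRIVE = 30   # peak draw at spin-up; far less once spinning
SYSTEM_W = 200            # CPU, board, fans, HBA, NIC
AGING_MARGIN = 0.30       # pad for the supply losing capacity with age

peak_w = DRIVES * SPINUP_W_PER_DRIVE + SYSTEM_W   # 920W
recommended_w = peak_w * (1 + AGING_MARGIN)       # ~1196W -> a 1200W unit

print(f"peak draw {peak_w}W, with margin {recommended_w:.0f}W")
# Staggered spin-up (most SAS HBAs and backplanes support it) cuts the
# peak substantially, which is why a smaller Gold unit can also work.
```

And while we're at it, a sanity check that the parts really do land near the $2K quoted earlier (I've assumed the single-card LSI 9305-24i as the HBA and four 16GB DIMMs):

```python
# Parts total for the Rocket Lake build, using the prices quoted above.
parts = {
    "Xeon E-2334": 300,
    "Supermicro X12STL-F": 306,
    "Norco RPC-4224": 640,
    "LSI 9305-24i HBA": 300,
    "4 x 16GB ECC DDR4": 4 * 45,
    "Kingston NV2 500GB": 88,
    "Thermaltake 1200W PSU": 220,
    "Chelsio 10GbE": 45,
}
for name, price in parts.items():
    print(f"{name:24s} ${price}")
print(f"{'Total':24s} ${sum(parts.values())}")   # $2,079 -> about $2K
```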
Finally, if you don't want to put all that together, you can get a Supermicro-built chassis as well:
- Supermicro SuperChassis CSE-836BA-R920B. This is a $1,300 unit, but it is only 3U with 16 drive bays, and it comes with redundant power supplies. This is an alternative to the Norco and Thermaltake; just be aware that it is really loud, as most servers are, and the power supply doesn't have much headroom.