OK, I finally got the message I’ve been avoiding for two years: one of my disk arrays is at capacity. So it’s time now, in 2020, to think about what to do about home storage. A lot has changed in the last two years, most notably:
- Disk sizes have grown incredibly. Two years ago, 10TB was really the state of the art, but now 16TB drives are out and pretty reasonably priced. At $400 for a 16TB drive, they are the way to go for home storage, but as we will discuss later, this has a terrible interaction with RAID redundancy.
- Cloud backup and storage have gotten much better: with iCloud you get 2TB, and with Google Drive for $10/month you get unlimited storage. That, plus the addition of Hyperbackup and Cloud Sync on home NASes, has made things much easier.
- As a real but confusing aside, there are two definitions of terabyte. Technically, a TiB or tebibyte is 1024^4 bytes, whereas a TB is 1000^4 bytes. But in practice, computer folks use TB to mean 1024^4 bytes, as that was the traditional definition, so in this piece I’ll do it the old way: a TB is the real computer kind, 1024^4 or 2^40 bytes.
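To see what that difference amounts to in practice, here’s a quick check (just a toy calculation):

```python
# Decimal terabytes (how drives are sold) vs. binary tebibytes
# (what the OS usually reports).
TB = 1000**4   # 10^12 bytes
TiB = 1024**4  # 2^40 bytes

drive = 16 * TB                   # a "16TB" drive as sold
print(f"{drive / TiB:.2f} TiB")   # ~14.55 TiB once the OS counts it
print(f"{1 - TB / TiB:.1%} gap")  # ~9.1% difference at the tera scale
```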
So, net, net, why do people still have data storage at home? The answer for most people is that they don’t need it anymore. For most ordinary folks, here is what I would suggest:
- If you are in Apple land, then spring for the $15/month Family plan. This gives you 2TB shared between up to five family members. Even taking a zillion photos, we have only used about 1TB of that so far. The beauty of this is that all your documents, videos and photos get automatically backed up and saved.
- If you are in Android land, I’m not quite sure what to recommend yet because I don’t use it much. The main issue with Google Photos is privacy: you need to look closely at the licensing agreements, because they can use your photos for machine learning, etc. But if you don’t care about that, then getting a G Suite account makes some sense, and then you are in unlimited storage land again.
- If you are in Windows land, then Google makes some sense as well because it is really cross-platform and works with Android. Of course, there is OneDrive, but that doesn’t handle either Apple or Android phones.
But some of us have significant content that doesn’t come from a phone. For me, that’s photographs taken with DSLRs and videos from old cameras and the like. What should you do about that? Well, that’s a bit more of a problem because, with modern 4K video and 40MP cameras, it really adds up fast. Right now we have 5TB of stills and close to 30TB of videos that need to go somewhere, and just uploading it all isn’t really an option.
First, there is the Comcast cap of 1TB of data per month, and then there is the fact that you do want to edit, print, and do other things locally. All of this leads to the need for some nontrivial local storage.
Here are the recommendations:
- Get a NAS. This means network-attached storage. I normally get the Synology boxes, but QNAP is good as well. Synology has done an awesome job stretching from a simple-to-use web interface all the way to the real Linux tools like ssh and rsync for power users. Right now we have a DS1812+ and a DS2413+ at home; these are an 8-drive system and a 12-drive system. They plug into your Ethernet and are decently fast. The DS2413+ even allows bonding, so you can take two gigabit Ethernet ports and provide 2Gbps of bandwidth across multiple clients. In the real world, these are not big barriers; the disks themselves are the issue.
- Enable scrubbing. Bit rot is a real problem in these systems. I’ve lost quite a few JPEGs because the hard disks have errors, and with JPEGs even a single bit flip destroys the image. So you want scrubbing, where the system periodically reads through the files and makes sure there are no flips (see the sketch after this list). Btrfs allows this with Synology, and their software RAID allows it too.
- Backup to the cloud. There are now some really great tools for this. The cheapest by far is to get unlimited Google Drive and then use Hyperbackup to push blocks up there. I used to use a dedicated cloud backup tool called Crashplan, but this works pretty much as well. And of course, you really want at least two cloud backups for redundancy, so I actually back up to Google Drive and am looking for another place to put things. Amazon Glacier, which is offline (cold) storage, looks like a good choice.
- Local backups. You want a local backup as well. For this, in 2019, I had two NASes, and they backed up to two Drobos. These are older-technology storage systems that attach directly to a PC. It is important to have different technology stacks when you are doing backup, so one bug doesn’t take everything down.
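To make the scrubbing idea concrete, here is a minimal sketch of the technique: record a checksum for every file, then periodically re-read everything and flag mismatches. Btrfs does this far more efficiently at the block level with its own metadata; the manifest file here is purely my own invention for illustration.

```python
import hashlib
import json
import os

MANIFEST = "checksums.json"  # hypothetical manifest location

def sha256(path):
    """Hash a file in 1MB chunks so big video files don't exhaust RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root):
    """First pass: record a checksum for every file under root."""
    sums = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            sums[path] = sha256(path)
    with open(MANIFEST, "w") as f:
        json.dump(sums, f, indent=2)

def scrub():
    """Later passes: re-read every file and report silent corruption."""
    with open(MANIFEST) as f:
        sums = json.load(f)
    for path, digest in sums.items():
        if os.path.exists(path) and sha256(path) != digest:
            print(f"bit rot detected: {path}")
```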
Avoiding undetectable bit errors with RAID10
You want to have some redundancy in the NAS drives themselves, and the standard choice used to be parity. Traditionally, you would dedicate the capacity of one or two drives to so-called parity. This is called RAID5 for one-drive fault tolerance and RAID6 for two drives. Synology has their own flavors called SHR and SHR2 respectively, and Drobo has a similar technology. These allow heterogeneous drives, so you can just throw in a bunch of drives of different sizes.
However, there is a big problem with this: if you lose a drive in, say, an eight-drive array, you have to read all seven remaining drives completely to rebuild the eighth. The problem is that these drives have a nonzero Bit Error Rate (BER), and the drives themselves are huge. With 16TB drives, you are very likely to hit a hard bit error while rebuilding. Net, net, for large modern drives, SHR and SHR2 are actually bad ideas.
Specifically, consumer drives have unrecoverable error rates of about one per 10^14 bits read, which sounds like a lot but isn’t. Even in enterprise systems this is a big problem; higher-quality disks are 10x better at one per 10^15.
It’s one reason I buy enterprise drives, which are rated at 10^15 and which have five-year warranties. They cost more, but are 10x more reliable.
I’ve actually had quite a few JPEGs die for this reason, since they are vulnerable to a single bit flip. Why is that? Well, reading an entire 16TB drive means reading 16×10^12 bytes (note these are not computer TB, but disk ones), which is about 1.3×10^14 bits, so with a consumer drive that has an error every 10^14 bits, you should statistically expect at least one unrecoverable bit error per full read.
When you are rebuilding a RAID array, you have to read all the surviving drives to rebuild the failed one. So in a 12-drive array with 16TB drives, a rebuild means a complete read of 11 drives of 16TB each, or 176×10^12 bytes, which is about 1.4×10^15 bits; even on enterprise drives rated at one error per 10^15 bits, the odds of a bad read during the rebuild are high.
I’ve actually had this happen: you lose one drive, then in the course of rebuilding, you lose another one. What’s the solution? Well, you move to something called RAID10, where each drive is mirrored. This is less space-efficient than RAID5 or RAID6, since you use 50% of your storage for redundancy, but the rebuild is very fast and low-risk: you just copy from one mirror to the other. With RAID6, for example, in an 8-drive system only 2 drives go to redundancy, so it is more efficient, but it is exactly the scheme that falls over on rebuild.
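To put rough numbers on all this, here is the standard back-of-the-envelope calculation (my own sketch, assuming the quoted rate means one error per N bits read, independently):

```python
import math

def p_ure(bytes_read, ber_bits):
    """Probability of at least one unrecoverable read error when
    reading bytes_read bytes from drives rated at one error per
    ber_bits bits (Poisson approximation)."""
    return 1 - math.exp(-(bytes_read * 8) / ber_bits)

TB = 1000**4      # drives are sold in decimal TB
drive = 16 * TB   # one 16TB drive

# Parity rebuild in a 12-drive array: read all 11 surviving drives.
print(f"parity rebuild, consumer 1e14:   {p_ure(11 * drive, 1e14):.0%}")
print(f"parity rebuild, enterprise 1e15: {p_ure(11 * drive, 1e15):.0%}")

# RAID10 rebuild: read only the single mirror partner.
print(f"RAID10 rebuild, enterprise 1e15: {p_ure(drive, 1e15):.0%}")
```

On these assumptions, a parity rebuild of a 12-drive array of 16TB consumer drives is essentially certain to hit an error, the enterprise version still fails most of the time, and a RAID10 mirror copy is comparatively safe.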
The gotcha: you cannot just add more drives
The next complexity is the array architecture. With RAID5, RAID6, SHR, and SHR2, it is trivial to expand the array: you just add another drive and it reorganizes automatically.
But with RAID10, you can’t do that; you can’t just add more drives. Net, net, you have to do way more thinking about how you want it to work. So the convenience of moving drives around disappears with the very large drives. With 4TB drives (4×10^12 bytes), for instance, you have way less of a problem than with 16TB drives at the same 10^15 enterprise error rate.
How to lay out data
As an example of how to think about this, here is a dataset ordered from most valuable to least. For the most valuable data, I’d recommend having three on-site live versions, two sets of disks that you just keep offline, and three online storage sets (the totals are tallied in the sketch below):
- Photos and Home Movies. 5TB worth of all those scanned and other images.
- Documents. 1TB worth. These are legal and other things, but they are way smaller.
- Work files. 5TB. This is stuff that you don’t really want to lose.
Then there are things that you can recover, for instance music from CDs or DVDs. In this case, a single backup is probably enough, and you probably don’t even need an offsite copy if you don’t want it.
- Movies that you can recover from source. 14TB.
- TV Shows and other random videos. 7TB.
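Here is a quick tally of what that scheme implies (my arithmetic, counting three live + two offline + three online copies for the high-value tier and two copies for the recoverable tier):

```python
# Sizes in TB from the tiers above.
high_value = {"photos_and_home_movies": 5, "documents": 1, "work_files": 5}
recoverable = {"movies": 14, "tv_shows": 7}

high = sum(high_value.values())    # 11TB, kept in 8 places
recov = sum(recoverable.values())  # 21TB, kept in 2 places

print(f"high value:  {high}TB x 8 copies = {high * 8}TB of raw capacity")
print(f"recoverable: {recov}TB x 2 copies = {recov * 2}TB of raw capacity")
```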
Then let’s say you have the following hardware. Note that you want systems from different vendors, so one software bug doesn’t wipe them all out. This is sort of a typical home setup, where there is gear from multiple generations and you use the latest one for online storage and the rest for backups. As an example:
- Six 16TB drives in the 12-drive Synology system, in a RAID10 array. Keep this one on all the time; it is large enough to store everything.
- The other six bays of that Synology with 10TB drives in RAID10. Splitting the drive set in half makes some sense because, as drive technology improves (22TB is coming), you can leapfrog: take the 10TB drives out to use as an archive and replace them with 22TB drives when you need to.
- The 8-drive Synology. Use this for the high-value files, with 6TB drives in SHR2.
Then we have a set of near-line machines that are normally off, but that we use for weekly backups:
- The 8-drive Drobo. Use it as a backup for lower-value files, with 4TB drives and two-drive redundancy. These are smaller drives, and the Drobo is limited to 32TB total anyway, so parity-style redundancy like RAID6 is OK here.
- The 4-drive Drobo. Also for high-value files, with 4 x 8TB drives and one drive of redundancy.
With this kind of scheme, you can see that each system gets an older family of drives. That’s because older systems tend to have capacity maximums, so you basically migrate disks down the chain.
Buying new drives
In that sense, you are feeding drives down the chain, so you buy drives at the lowest price per byte but with the longest future life. Drive failures are a real problem for a home, so net, net, I don’t buy drives rated worse than one error per 10^15 bits or with less than a five-year warranty. I used to like Hitachi, but I now normally favor the Seagate Enterprise drives; the WD Red and the Seagate IronWolf lines get decent reviews too. Current pricing looks like this, with a per-TB comparison after the list:
- For budget uses, the Seagate IronWolf 4TB ST4000VN0008 is not a bad choice at $102 for a replacement drive. The Seagate Enterprise Capacity ST4000NM0015 at $150, or $38/TB, isn’t that great a buy. The IronWolf Pro, 7200rpm at $141, is nearly the same price as the Enterprise Capacity, so also not a good buy.
- The Seagate Exos Enterprise Capacity 6TB is $199, or $33/TB. If you want a cheaper drive, there is the Seagate IronWolf ST6000VN0033 at $162. So right now, if you have an older system, get 6TB and not 4TB; in the enterprise line, they are much cheaper per terabyte.
- Seagate IronWolf Pro 8TB at $270. This is actually more expensive than the 10TB drive, so if you need 8TB or more, just get the 10TB.
- Seagate Enterprise Capacity 10TB ST10000NM0016. At $253, or $25/TB, this is right now the sweet spot of pricing.
- Seagate Exos 16TB Enterprise at $400. These are a surprisingly good buy on a cost-per-TB basis, also at $25/TB.
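A quick way to rank these (prices as quoted above; nothing here is official Seagate data):

```python
# Cost per TB for the drives quoted above (decimal TB).
drives = {
    "IronWolf 4TB ST4000VN0008": (4, 102),
    "Enterprise 4TB ST4000NM0015": (4, 150),
    "Exos 6TB": (6, 199),
    "IronWolf 6TB ST6000VN0033": (6, 162),
    "IronWolf Pro 8TB": (8, 270),
    "Enterprise 10TB ST10000NM0016": (10, 253),
    "Exos 16TB": (16, 400),
}
for name, (tb, price) in sorted(drives.items(),
                                key=lambda kv: kv[1][1] / kv[1][0]):
    print(f"{name:30} ${price / tb:5.2f}/TB")
```

Sorted this way, the 16TB Exos and the 10TB Enterprise Capacity come out on top, which is why they are the sweet spot.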
A note on terminology
The Synology systems have a very specific nomenclature for their storage hierarchy:
- Storage pool. This means a collection of drives; you can have multiple pools per hardware unit. This lets you easily divide a 12- or 8-bay system into multiple logical systems, which is really useful with things like RAID10, where you can’t add drives arbitrarily. At this level, you decide the fault tolerance and, as previously discussed, that is going to be RAID10 for 10TB and larger drives, while Synology Hybrid RAID makes sense for 8TB and smaller drives: SHR2 (two-drive redundancy) on an 8-bay system and SHR (one-drive redundancy) on a 4-bay.
- Volumes. You can have multiple volumes on a storage pool, and this is where the file system is defined. ext4 is the most compatible and the default, but you want btrfs, as it makes backups much easier.
- Shared folders. Finally, on top of volumes, you create the folders where the files live. This level is pretty important because it is the unit of storage management: I like to keep files of the same type, like all movies, in the same shared folder. This matters quite a bit when you are doing replication, because shared folders are the unit of snapshots and snapshot replication in btrfs. They are also what you mount on your desktop, so if you have too many of them it’s really a pain, but with too few, on smaller drive arrays, you get a they-don’t-fit problem. I try to keep shared folder sizes under 16TB, since many machines (old Drobos) have limits (see the sketch after this list).
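As a picture of how the hierarchy nests, here is the example system above modeled as plain data (the pool and folder names are mine, not Synology’s):

```python
# Pool -> volume -> shared folder, sizes in TB (hypothetical names).
nas = {
    "pool_1": {  # six 16TB drives in RAID10, ~48TB usable
        "volume_1": {"photos": 5, "documents": 1, "work": 5,
                     "movies": 13, "tv_shows": 7},
    },
    "pool_2": {  # six 10TB drives in RAID10, ~30TB usable
        "volume_2": {"weekly_backup": 20},
    },
}

for pool, volumes in nas.items():
    for volume, shares in volumes.items():
        total = sum(shares.values())
        print(f"{pool}/{volume}: {len(shares)} shared folders, {total}TB used")
```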
Drobo uses a different hierarchy that is much simpler to manage, and their older devices (ours are 10 years old!) have some limits:
- Single storage pool. There is no separate concept of a storage pool with Drobo: all the disks in a Drobo form a single pool, so you select 1-disk or 2-disk redundancy for all the drives at once. For the older Drobos, there is a 32TB total limit per system, so an 8-drive array will waste space if the total disk space is bigger than 32TB (that is, 8 x 4TB), and a 4-drive array is limited to 4 x 8TB disks.
- Volumes. They only allow two volumes per system, and each must be less than 16TB.
- Shared folders are volumes. Drobo doesn’t have this level; a volume is the unit of sharing.
The implication is that on the Synology side you want relatively more shared folders, which you then have to “multiplex” onto the smaller number of volumes on the Drobos. Specifically, with two Drobos, you can’t have more than four volumes.
So an ideal mapping uses four shared folders, each less than 16TB so it fits into the Drobos for long-term backup (checked in the sketch after this list):
- Documents, Photos, and Home Movies, since these go together. 5TB + 1TB = 6TB.
- Software. 5TB.
- Movies. 13TB. This is actually pretty close to the 16TB limit now.
- TV Shows and Other Movies. 7TB.
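And a quick check that this mapping respects the Drobo limits (folder sizes from the list above):

```python
# The four shared folders (TB) and the old-Drobo limits from above.
folders = {"documents_photos_home_movies": 6, "software": 5,
           "movies": 13, "tv_shows": 7}
VOLUME_LIMIT_TB = 16   # per-volume cap on the old Drobos
VOLUMES_PER_DROBO = 2  # so two Drobos hold at most four volumes

assert len(folders) <= 2 * VOLUMES_PER_DROBO, "too many folders to map"
for name, size in folders.items():
    status = "fits" if size < VOLUME_LIMIT_TB else "TOO BIG"
    print(f"{name:30} {size:2}TB -> {status}"
          f" ({VOLUME_LIMIT_TB - size}TB headroom)")
```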
Backup strategies
OK, this is maybe the most complicated part, but there are five very different schemes for backup:
- Btrfs snapshots. This is a cool new feature of the Linux system in Synology machines. Basically, you can take a complete snapshot of the file system every hour, and only the deltas are kept. Btrfs is a B-tree file system (hence the name), so if most things are the same, the snapshot just points to the same blocks, and when something changes, a new data block is created. This makes incremental backup and restore very fast.
- Snapshot replication. A side effect of this feature is that you can very quickly send snapshots over the network to another btrfs partition and run backups against it. It uses the same btrfs trick to make incrementals very fast.
- Rsync synchronization. This is an older technology that lets you copy from ext4 onto btrfs; once that’s done, you can get rid of ext4 (a snapshot-style rsync sketch follows this list).
- Goodsync. This is a paid utility, but with Drobos attached to a MacBook, it gives the same kind of nice graphical user interface that Synology has. While you can use the command line for this, the chances of making a mistake are higher. It’s pretty slow though.
- Hyperbackup. This does incremental block-by-block backup to cloud services. It does client-side encryption, so you don’t have to worry about Google reading your stuff. The right strategy is to use Google Drive for near-line backup and then something really cheap like AWS Glacier for long-term backups. In the old days I would have used a dedicated backup service like Crashplan, but with AWS and Hyperbackup, it doesn’t seem necessary.
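As an illustration of the rsync approach (this is a generic snapshot-style backup using rsync’s --link-dest, not what Synology does under the hood; the paths are hypothetical):

```python
import datetime
import subprocess

# Snapshot-style rsync backup: each run lands in a new dated directory,
# with unchanged files hard-linked to the previous snapshot so only the
# deltas consume space (a poor man's btrfs snapshot).
SOURCE = "/volume1/photos/"  # hypothetical source share (trailing slash matters)
DEST = "/mnt/backup/photos"  # hypothetical backup root

today = datetime.date.today().isoformat()
subprocess.run([
    "rsync", "-a", "--delete",
    f"--link-dest={DEST}/latest",  # on the first run rsync just warns
    SOURCE, f"{DEST}/{today}",
], check=True)

# Repoint "latest" at the new snapshot for the next run.
subprocess.run(["ln", "-sfn", f"{DEST}/{today}", f"{DEST}/latest"], check=True)
```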