OK, with 10TB hard disks and a UBER of 1 in 10^14 (the typical consumer-drive rating), you really can't build traditional RAID arrays anymore, because a complete read of a 10TB disk is likely to hit an unrecoverable bit error. So what's a person to do?
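To make that claim concrete, here's a quick back-of-the-envelope in Python. The function name and the 1e14 rate are my own illustration (1e14 is the usual consumer-HDD spec; plug in your drive's rating):

```python
import math

def p_unrecoverable_read(disk_tb, uber_bits=1e14):
    """P(at least one unrecoverable read error) over a full read of disk_tb terabytes."""
    bits = disk_tb * 1e12 * 8                       # decimal TB -> bits
    # 1 - (1 - 1/uber)^bits, computed stably for tiny per-bit error rates
    return -math.expm1(bits * math.log1p(-1 / uber_bits))

print(f"10TB disk @ 1e14 UBER: {p_unrecoverable_read(10):.0%}")        # about 55%
print(f"10TB disk @ 1e15 UBER: {p_unrecoverable_read(10, 1e15):.0%}")  # about 8%
```

So at the consumer rating, a full-disk read during a rebuild is more likely than not to hit an error; even at the enterprise 1e15 rating it's not negligible.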
Well, here are some strategies for a 12-disk array (a 12-bay Synology):

  1. Install a RAID-1 pair of SSDs as a cache; SSDs are rated at 1 in 10^17 and are much more reliable. If you buy a 1TB pair, most of the data you actually use will live on these SSDs.
  2. For the remaining 10 drives, you have a few choices centered on a RAID10 set. RAID10 seems worse than RAID6 or SHR2 because it has only one disk of fault tolerance, but since RAID-1 means mirroring, a rebuild is a straight copy and much less likely to hit a UBER. As an aside, we really need something that mirrors to three disks to get two disks of fault tolerance, but that's not a thing 🙂
  3. With the 10 remaining drives you are also more limited, because RAID10 drives have to match exactly. One of the nice things about SHR2 (Synology Hybrid RAID) is that you don't need disks of the same size.
  4. With your random disks that are smaller than 6TB, you can keep using SHR2; the odds of a read error during a rebuild are roughly half those of the 10TB disks.
  5. Then for your new 10TB disks, create a RAID10 array. Note that you can't extend a RAID10 array either, so you need to create it at full size in place. When you run out of storage later, you can retire the random disks and build a second matched RAID10 array in their place.
  6. As an aside, get 4K-native (4Kn) drives for the new 10TB set and 512e drives for the others, assuming SHR2 is going to be used with the bunches of old disks. Since Synology requires you to keep 4Kn drives separate from 512e drives, this split works nicely.
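The mirroring argument in step 2 is easy to check by brute force. Here's a toy sketch (hypothetical 4-disk pools, nothing Synology-specific) enumerating which two-disk failure combinations each layout survives:

```python
from itertools import combinations

disks = range(4)
pairs = [(0, 1), (2, 3)]               # RAID10: two mirrored pairs

def raid10_survives(failed):
    # The array dies only if BOTH halves of some mirror pair fail.
    return not any(set(p) <= set(failed) for p in pairs)

def raid6_survives(failed):
    return len(failed) <= 2            # RAID6 tolerates any two failures

double_failures = list(combinations(disks, 2))
r10 = sum(raid10_survives(f) for f in double_failures)
print(f"RAID10 survives {r10}/{len(double_failures)} double failures")   # 4/6
print(f"RAID6  survives {len(double_failures)}/{len(double_failures)}")  # 6/6
```

So RAID10 survives most, but not all, double failures; its real advantage is that a rebuild is a sequential mirror copy rather than a full-array parity read.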

So here is one disk layout.

  1. 2 x 1TB SSDs. These are the caching drives.
  2. 4 x 10TB 4Kn hard disks. These are the “new” store; set them up as RAID10, so you have two mirrored pairs and stripe data across both pairs at once. This gives you 20TB of storage.
  3. 6 x random 512e disks in SHR2. If you have a collection of 3TB, 4TB and 6TB drives, you can gradually expand this with drives up to 6TB until you reach a full 6x6TB with 2-disk parity, effectively 24TB of data.
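The usable capacity of this layout pencils out as follows (decimal TB; the SHR2 figure assumes the pool is eventually fully populated with 6TB drives):

```python
# Usable capacity, decimal TB; the SSDs are cache only, so no net capacity.
raid10_new  = 4 * 10 // 2      # two mirrored pairs of 10TB -> 20TB
shr2_random = (6 - 2) * 6      # 6x6TB with 2-disk parity   -> 24TB
total = raid10_new + shr2_random
print(f"RAID10 {raid10_new}TB + SHR2 {shr2_random}TB = {total}TB usable")
```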

Then when you run out of that 44TB of storage (yes, this will happen :-), you can convert the 6 random-disk slots into 6 x 10TB in RAID10 to reach a total capacity of 20TB + 30TB = 50TB, with a good chance of recovery if a 10TB drive fails. When the 12TB and 14TB drives come out, you can buy them in groups of 4 or 6 and swap them in.
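That upgrade path in numbers:

```python
# After swapping the 6 random-disk slots for 6x10TB in RAID10:
existing_raid10 = 4 * 10 // 2    # original 4x10TB set  -> 20TB
new_raid10      = 6 * 10 // 2    # converted slots      -> 30TB
print(f"Total: {existing_raid10 + new_raid10}TB usable")
```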
For an 8-disk array, it is a little trickier:

  1. The easiest thing to do is to stick with 6TB 512e drives, eventually getting 8x6TB in SHR2, which effectively gives you 6x6TB or 36TB.
  2. If you need more performance, use 2 x 1TB SSDs for caching, leaving 6 bays for 10TB drives in RAID10, effectively 30TB. In this configuration things will actually be faster, but you will have less storage than with SHR2 and smaller drives; the upside is avoiding the bit-error problem on rebuilds.
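The two 8-bay options compare like this (usable decimal TB):

```python
# Usable capacity for the two 8-bay options, decimal TB.
shr2_option   = (8 - 2) * 6      # 8x6TB SHR2, 2-disk parity -> 36TB
raid10_option = 6 * 10 // 2      # 2 SSDs + 6x10TB RAID10    -> 30TB
print(f"SHR2 8x6TB:    {shr2_option}TB")
print(f"RAID10 6x10TB: {raid10_option}TB")
```

So you trade 6TB of capacity for faster access and a much safer rebuild.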

