OK, I couldn't help it: there was a 50%-off sale on the OWC ThunderBay, so I had to go out and get it. Basically, I got an enclosure for the same price as eight 12TB drives, which is quite a deal. They shipped very promptly (sorry, Dean, that yours is segfaulting; I'll help you debug it). The thing comes wrapped up really tight, and here is what I learned:
- I bought a Thunderbolt cable thinking it didn't come with one; it turns out it does, so you don't need to get one. Still, OWC is reputable, and it is nice to get a cable. It is labeled a Thunderbolt 4 cable, for what it is worth.
- There is a rather concerning little flyer in the box that lists the setup steps for Windows, for Macs with T2 security chips (late-model Intel Macs, about 2018 or later), and for Apple Silicon Macs. Fortunately, I'm hooking this up to a generic 2017 Intel Mac running Sonoma and didn't have any problems.
- I normally want to use Homebrew to install, but `brew install softraid` points to an old version, SoftRAID XT version 6, so that doesn't work (yes, I need to learn how to edit these brew scripts, but no time). It turns out that by default the drive is formatted as RAID 5 with the SoftRAID XT software already on it, so you just need to plug it into power, turn the power switch on, and connect the Thunderbolt cable to your Mac.
- However, it says you are on a 14-day trial of the SoftRAID software. The good chat folks told me that the serial number for SoftRAID is inside the disk array, but there are no instructions on how to open the box up. You do get keys, but turning them does nothing, so there is probably a latch. In any case, support had the code, and this unlocked it.
Changing from RAID 5 to RAID 10 fails with "SoftRAID is blocked from accessing one or more drives": use Disk Utility?!
The default is an eight-drive array with RAID 5, and this is actually not a great option for 12TB drives; I really want RAID 10 because the rebuild times are fast. With RAID 5 on a 96TB array you get 84 usable terabytes; with RAID 10 it is down to 48TB, but you get very fast rebuild times and you don't risk a rebuild destroying your whole array.
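The capacity math works out like this; a quick sketch, assuming one drive's worth of parity overhead for RAID 5 and mirrored pairs for RAID 10:

```python
# Usable capacity of an 8 x 12TB array under the two RAID levels discussed.
drives = 8
size_tb = 12

raw_tb = drives * size_tb                # 96TB raw
raid5_usable = (drives - 1) * size_tb    # one drive's worth of parity -> 84TB
raid10_usable = (drives // 2) * size_tb  # mirrored pairs -> 48TB

print(f"raw: {raw_tb}TB, RAID5: {raid5_usable}TB, RAID10: {raid10_usable}TB")
# -> raw: 96TB, RAID5: 84TB, RAID10: 48TB
```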
The problem is that when I tried this with the SoftRAID XT application, first turning off Safeguard and then deleting the volume, it said it did "not have full disk access." So you have to go to Settings > Privacy & Security > Full Disk Access and turn this on for SoftRAID, but it still fails.
I suspect that something like Backblaze has the volume open, but I can't figure out what it is. The SoftRAID forums mention using Disk Utility in conjunction with SoftRAID, so here is what I had to do:
- Start Disk Utility. Scroll down to the section that has the ThunderBay drives.
- Click "Unmount" in the upper menu bar.
- Now go back to SoftRAID; you will see the drive unmount, and now you can Erase from there.
- Now go to the free drives in the left pane, select them all, and choose New Volume.
- Select RAID 1+0.
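The Unmount step can also be done from Terminal with `diskutil unmount`, the command-line equivalent of Disk Utility's Unmount button. A minimal sketch, assuming your volume is at "/Volumes/ThunderBay 8" (mine; substitute your own); the actual call is commented out, so this just prints the command it would run:

```python
import subprocess

# "/Volumes/ThunderBay 8" is my volume name; change it to match yours.
volume = "/Volumes/ThunderBay 8"
cmd = ["diskutil", "unmount", volume]

# subprocess.run(cmd, check=True)  # uncomment on the Mac with the array attached
print(" ".join(cmd))
```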
Format as HFS+ and not as APFS
This is a little unintuitive, but you should not use the new APFS file system when you are using hard disk drives (it is fine with SSDs). The problem is that APFS has a feature called copy-on-write, which becomes really slow on spinning disks. An OWC expert explains the problem (at 3:20 in their video): with copy-on-write, you can have two file pointers that point to the same data extents, so copies are very fast. But when you update the file, the changed blocks are written elsewhere, so when you read the modified file you have to read the original data and then shift to read a different fragment, so you need two seeks every time. StackExchange mentions that you can turn on defragmentation to mitigate this.
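Here's a toy model of why that hurts on a hard disk; this is my own sketch, not anything from APFS internals. A file is a list of (start, length) extents, and every jump to a non-adjacent extent costs a seek:

```python
# Toy model: count the seeks needed to read a file laid out as extents.
# Each extent is (start_block, length); a seek happens whenever the next
# extent doesn't begin where the previous one ended.

def seeks_to_read(extents):
    seeks = 1  # initial seek to the first extent
    for (s1, n1), (s2, _) in zip(extents, extents[1:]):
        if s2 != s1 + n1:
            seeks += 1
    return seeks

contiguous = [(0, 100)]                     # freshly written file: one extent
after_cow = [(0, 50), (500, 10), (60, 40)]  # middle blocks rewritten elsewhere

print(seeks_to_read(contiguous))  # 1 seek
print(seeks_to_read(after_cow))   # 3 seeks
```

On an SSD those extra seeks are essentially free, which is why APFS is fine there.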
With macOS 10.14 Mojave, Apple is adding a disk defragmenter to do this. They should really just have flagged HDDs.
But what about my Apple RAID 1 partition? It also uses HFS+ with hard disks
Now I'm using SoftRAID for the ThunderBay, but Apple has its own RAID 1 option in Disk Utility. It turns out that if you run `mount`, you can see that the attributes of a RAID 1 volume are HFS and not APFS, so Apple does the right thing. In short: with hard disks use HFS+, and with SSDs use APFS:
/dev/disk13 on /Volumes/Rich's Apple RAID1 HDD (hfs, local, nodev, nosuid, journaled)
/dev/disk19 on /Volumes/ThunderBay 8 (hfs, local, nodev, nosuid, journaled, noowners)
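If you want to check this programmatically, the filesystem type is the first token inside the parentheses of each `mount` line. A small sketch that parses the output above (the sample is hard-coded from my machine; you would feed it the real output of `mount`):

```python
# Parse mount-style lines and report each volume's filesystem type.
sample = """\
/dev/disk13 on /Volumes/Rich's Apple RAID1 HDD (hfs, local, nodev, nosuid, journaled)
/dev/disk19 on /Volumes/ThunderBay 8 (hfs, local, nodev, nosuid, journaled, noowners)"""

def fs_types(mount_output):
    types = {}
    for line in mount_output.splitlines():
        # mount point sits between " on " and the final " (attrs...)"
        volume = line.split(" on ", 1)[1].rsplit(" (", 1)[0]
        # first attribute in the parentheses is the fs type (hfs, apfs, ...)
        fstype = line.rsplit("(", 1)[1].split(",")[0]
        types[volume] = fstype
    return types

for vol, fs in fs_types(sample).items():
    print(f"{vol}: {fs}")  # both volumes report hfs
```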
You can make this mistake manually with a Drobo and Disk Utility
I did, however, make this mistake with my old Drobos. When I reformatted them, the obsolete Drobo Dashboard didn't work, so I just used Disk Utility and erroneously made them APFS. There is no way to fix this other than to back them up and reformat: you basically have to erase the APFS volume and recreate a new HFS+ volume, and you will lose all your data. My bad!
Reported problems with Sonoma and Apple Silicon
My buddy Dean reports that he is getting segmentation faults and crashes. I first tried this on an Intel Mac and will report back my luck with Ventura and an M1 Mac.
Real World Performance from USB 2 to Thunderbolt 2/4 to Mac SSD to NAS
Okay, so how do these perform in real life? With Blackmagic Disk Speed Test it's pretty easy to find out. It does 5GB-long reads and writes, so it is accurate for video and other large-image-format editing:
- USB 2 to DroboPro. This should be pretty slow: I have 8 drives in their RAID 6 configuration, but the 480Mbps of USB 2 is going to slow things down quite a bit. The drives themselves are probably 200-400MBps drives, but with USB 2.0 this comes out to 30MBps slowing to 11MBps (120Mbps; USB 2 has lots of overhead, I guess, and there is probably a cache on the other side in the Drobo itself, but it's hard to tell).
- NAS over 1Gbps Ethernet. Interestingly, with a Synology NAS that has 200MBps drives in a RAID 10 configuration (see below; direct-attached, you would get 500-800MBps), over Ethernet you get much less, more like 32MBps. This makes some sense: 1Gbps Ethernet can support at most about 100MBps, so that's a big limiting factor.
- Thunderbolt 2 to RAID 1. This is a 16TB x 2 drive set using the older 20Gbps Thunderbolt 2 interface. Since these are hard drives running at maybe 200-300MBps, we are not going to saturate the link, which can support roughly 2.5GBps; in fact, we see the performance of a single disk, 223MBps write and 210MBps read, but you get redundancy.
- Thunderbolt 4 to RAID 5. This is the ThunderBay with RAID 5; you essentially get seven drives' worth of performance. Again, at 40Gbps, you are not going to saturate that link. This is why Thunderbolt 4 docking stations are so good: it is really hard to use all that bandwidth, the main exception being an SSD on the other side. And in fact, we get a gratifying 530MBps write and 470MBps read. This isn't too bad given that this is software RAID, so the parity is being computed on the Mac itself.
- Thunderbolt 4 to RAID 10. This should be a little faster since there is no parity and you are, in effect, streaming off four disks at once, depending on how the striping is done. And in fact, I get 800MBps write and 510MBps read; the fast writes are probably because there's a write cache somewhere in the drives, I would guess.
- Intel Mac SSD. This is an early-generation SSD system and doesn't have the amazing performance of Apple Silicon, but it is still running at a healthy 388MBps write, and reads are an amazing 2GBps. The writes are slow because this is only a 128GB disk and it is pretty fragmented. Unfortunately, the video system failed in 2016, but I would have liked to test that.
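The pattern in all of these is that throughput is the smaller of what the link can carry and what the drives can deliver. A back-of-the-envelope sketch (the drive speeds are my guesses from above, not measurements, and protocol overhead is ignored):

```python
# Rough throughput model: the bottleneck is the slower of the link and the drives.
def bottleneck_MBps(link_mbps, drives_MBps):
    link_MBps = link_mbps / 8  # bits -> bytes, ignoring protocol overhead
    return min(link_MBps, drives_MBps)

# (link speed in Mbps, guessed aggregate drive speed in MBps)
setups = {
    "USB 2 to DroboPro":        (480, 300),     # link-limited: ~60MBps ceiling
    "1Gbps Ethernet to NAS":    (1_000, 400),   # link-limited: ~125MBps ceiling
    "Thunderbolt 2 to RAID 1":  (20_000, 250),  # now the single drive is the limit
    "Thunderbolt 4 to RAID 10": (40_000, 800),  # four striped drives, well below the link
}

for name, (link, drives) in setups.items():
    print(f"{name}: ~{bottleneck_MBps(link, drives):.0f}MBps")
```

This lines up with the measurements: the slow interfaces cap well below the drives, while on Thunderbolt the drives themselves are the limit.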