Well, we finally got our file server running, and with three SAS drives to practice on, it’s time to learn how to use ZFS. For convenience we are using Ubuntu, and there are some handy instructions for installing it on Trusty Tahr (14.04), annotated here with notes from arstechnica.com.
The installation instructions are pretty easy:

# get add-apt-repository
sudo apt-get install software-properties-common
# get the zfs library
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install -y ubuntu-zfs
# load the kernel module now
sudo modprobe zfs
# and for subsequent reboots
echo zfs | sudo tee -a /etc/modules
# check to see that it is installed
lsmod | grep zfs
# see what disks I have
lsblk
# create a mirrored pool named zfs1 from the two drives at sdc and sdd
# (raidz1 means 1 parity drive, like RAID 5, so resistant to a single drive failure;
# raidz2 means 2 parity drives, like RAID 6, resistant to two failures;
# raidz3 means 3 parity drives, with no classic RAID name, resistant to three failures;
# raidz2 needs at least four drives to be useful, so with only two we mirror)
# ashift=12 means use 2^12 (4096-byte) sectors rather than the 512-byte default
sudo zpool create -o ashift=12 zfs1 mirror sdc sdd
# raw capacity
sudo zpool list
# capacity after format and parity drives
sudo zfs list
# create a file system called users on zfs1
sudo zfs create zfs1/users
# make it shareable by samba
sudo zfs set sharesmb=on zfs1/users
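For reference, here is a small sketch that prints (without running) the zpool create commands for the common layouts. The device names sdc/sdd/sde/sdf are placeholders; pick real ones from lsblk.

```shell
#!/bin/sh
# Sketch only: build and print, rather than run, zpool create commands
# for the common layouts. Device names are placeholders from lsblk.
POOL=zfs1
MIRROR_CMD="sudo zpool create -o ashift=12 $POOL mirror sdc sdd"
RAIDZ1_CMD="sudo zpool create -o ashift=12 $POOL raidz1 sdc sdd sde"
RAIDZ2_CMD="sudo zpool create -o ashift=12 $POOL raidz2 sdc sdd sde sdf"
echo "$MIRROR_CMD"   # two-way mirror: survives one drive failure
echo "$RAIDZ1_CMD"   # one parity drive: survives one failure
echo "$RAIDZ2_CMD"   # two parity drives: survives two failures
```

Note that each parity level needs at least one more data drive than parity drives, which is why the two-drive box here gets a mirror.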


### Trap: ARC maximum memory

There is a big trap here in that ZFS on Linux will chew up available memory, so you need to limit its cache (the ARC) size, typically to half of total system memory, by creating an /etc/modprobe.d/zfs.conf file:

# /etc/modprobe.d/zfs.conf
# yes you really DO have to specify zfs_arc_max IN BYTES ONLY!
# 16GB=17179869184, 8GB=8589934592, 4GB=4294967296, 2GB=2147483648, 1GB=1073741824, 500MB=536870912, 250MB=268435456
#
options zfs zfs_arc_max=8589934592
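Rather than memorizing those magic numbers, you can derive the byte value. A minimal sketch, with 8 GiB as an example limit:

```shell
#!/bin/sh
# Sketch: compute zfs_arc_max in bytes instead of copying magic numbers.
gib_to_bytes() {
  echo $(( $1 * 1024 * 1024 * 1024 ))
}
ARC_MAX=$(gib_to_bytes 8)   # 8 GiB as an example; use about half your RAM
echo "options zfs zfs_arc_max=$ARC_MAX"
# prints: options zfs zfs_arc_max=8589934592
```

Pipe that echo through sudo tee /etc/modprobe.d/zfs.conf if you want to write the file in one shot.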


### Trap: vdevs are immutable

Well, this is an even bigger problem. Physical drives (devices, in Linux speak) are formed into a larger Virtual DEVice, or vdev, which is typically a RAID group. The problem is that once you create a raidz vdev, you can’t add drives to it. So if you have, say, three 4TB drives in a vdev and you run out of space, you can’t just drop in a fourth drive to grow it.
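The usual workaround is to add a whole new vdev to the pool instead of growing the old one; ZFS then stripes across both vdevs. A sketch, where the device names are placeholders and the command is only printed, not run:

```shell
#!/bin/sh
# Sketch: grow the pool by adding a second vdev (here a three-drive raidz1).
# Device names sde/sdf/sdg are placeholders; this prints the command only.
GROW_CMD="sudo zpool add zfs1 raidz1 sde sdf sdg"
echo "$GROW_CMD"
```

The new vdev starts out empty, which leads straight into the next trap.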

### Trap: zpools fill up last added drives fast

A zpool is basically a striped array (RAID 0) across its vdevs, and this lets you add multiple vdevs or raw devices to a pool. But when you do, ZFS allocates new writes in proportion to each vdev’s free space, so if you have 1TB free on one vdev and 10TB free on another, roughly 10x more of each new write lands on the emptier, newly added one.
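The proportional-fill behavior is easy to see with a little arithmetic. The free-space numbers below are made up to match the example above:

```shell
#!/bin/sh
# Sketch of proportional fill: ZFS spreads new writes across vdevs in
# proportion to each vdev's free space. Example numbers only.
FREE_OLD=1    # TB free on the nearly full vdev
FREE_NEW=10   # TB free on the freshly added vdev
TOTAL=$(( FREE_OLD + FREE_NEW ))
# share of each new write that lands on each vdev, in whole percent
OLD_PCT=$(( FREE_OLD * 100 / TOTAL ))
NEW_PCT=$(( FREE_NEW * 100 / TOTAL ))
echo "old vdev gets ~${OLD_PCT}% of new writes, new vdev ~${NEW_PCT}%"
```

So the freshly added drive absorbs nearly all new data until the free-space ratio evens out.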

### Tip: name the drives by the physical labels on them

This tip comes from Ars Technica. I didn’t know this, but you can find those kinds of labels with ls -l /dev/disk/by-id, which shows each disk three ways: by its wwn ID, by its model and serial number as connected to the ATA bus, or by its model and serial number as connected to the (virtual, in this case) SCSI bus. Unlike sdc-style names, these don’t shuffle around when drives are recabled or reordered at boot.

### Tip: Create lots of filesystems because you can compress and grow and shrink in a single line

In ZFS, a file system looks like a folder, so create it with the syntax sudo zfs create zfs1/images and then you can set properties on that entire file system. You can’t do that with plain folders you create within a file system.

# compress documents
sudo zfs set compression=on zfs1/documents
# resizing a file system is as easy as setting a quota
sudo zfs set quota=200G zfs1/documents
# and you can grow it again just as easily
sudo zfs set quota=1T zfs1/documents
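Since file systems are this cheap, one per top-level “folder” is a reasonable pattern. A sketch, where the names are examples and the commands are printed rather than run:

```shell
#!/bin/sh
# Sketch: one zfs create per top-level "folder" so each can get its own
# compression, quota, and sharing settings. Names are examples; the
# commands are printed, not executed.
FILESYSTEMS="documents images music backups"
for fs in $FILESYSTEMS; do
  echo "sudo zfs create zfs1/$fs"
done
# and you can always check what you set
echo "sudo zfs get compression,quota zfs1/documents"
```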


### Tricks: Snapshot and backup your file system

I can’t believe how simple it is to make a backup. ZFS uses a copy-on-write scheme, so when you snapshot, it keeps the old disk blocks around as you write new ones. Cool. The syntax is a little weird, but it is basically pool/filesystem@snapshot-name

# make a snapshot
sudo zfs snapshot zfs1/documents@snapped-2014-12-11
# list all your snapshots
sudo zfs list -t snapshot -r zfs1/documents
# roll back to that snapshot whenever you want
# (rollback unmounts and remounts the file system for you; add -r if you
# also want to discard any snapshots newer than the one you roll back to)
sudo zfs rollback zfs1/documents@snapped-2014-12-11
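Rather than typing the date into the snapshot name by hand, you can build it with date(1). A sketch that only prints the command:

```shell
#!/bin/sh
# Sketch: build a date-stamped snapshot name like the one above.
# The command is printed, not run.
SNAP="zfs1/documents@snapped-$(date +%Y-%m-%d)"
echo "sudo zfs snapshot $SNAP"
```

Drop a line like that into cron and you get daily snapshots for free.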


### Tricks: Replication of ZFS to another machine

OK, this is pretty cool. If you can ssh into another machine, then you can send all your changes from one system to the other:

# send a snapshot over an ssh tunnel to backup-server
# (zfs receive needs a target dataset; the backup pool name here is just an example)
sudo zfs send zfs1/documents@snapped-2014-12-11 | ssh backup-server sudo zfs receive backup/documents
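After the first full send, you only need to ship the delta between two snapshots, using zfs send -i. A sketch, where the backup-server host and backup/documents dataset are examples and the pipeline is printed rather than executed:

```shell
#!/bin/sh
# Sketch: incremental replication sends only the blocks changed between
# two snapshots. Host and target dataset names are examples; the
# pipeline is printed, not run.
FROM="zfs1/documents@snapped-2014-12-11"
TO="zfs1/documents@snapped-2014-12-18"
echo "sudo zfs send -i $FROM $TO | ssh backup-server sudo zfs receive backup/documents"
```

The receiving side must already have the FROM snapshot from an earlier full send, or the incremental receive will fail.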